Hacker News

> I don't understand the "coincidentally" argument.

Nothing is coincidental about those models. They were modeled on processes in the brain. They underwent rigorous training to produce a function that probabilistically maps inputs to outputs, and they eventually crossed the threshold at which most humans consider them intelligent. As these models grow larger, they will surpass human intelligence by far. Currently, large language models (LLMs) have fewer weights than human brains have synapses, by a factor of several thousand (based on my superficial research). But what happens when they have an equal number, or even 100,000 times more? Such models will be able to model reality in ways humans cannot. Complex concepts like the connection between time and space, which are difficult for humans to grasp, will be easily understood by them.

> LLMs do not hallucinate sometimes. They hallucinate all the time, it just is a coincident that sometimes these autocompletion of Tokens aligns with the reality. Just by chance, not by craft.

That is such a weird way to think about them. I'd rather say they always provide the answer that is most probable according to their internal model. Hallucination simply means that the internal model is not yet good enough and needs to be improved, which it will be.
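A minimal sketch of that idea: the model always emits whatever its internal distribution ranks highest, and a "hallucination" is just that distribution putting most of its mass on a wrong token. The vocabulary and logit values below are invented purely for illustration:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores after the prompt "The capital of France is".
vocab = ["Paris", "Lyon", "London", "banana"]
logits = [5.0, 2.0, 1.5, -3.0]

probs = softmax(logits)
# The model's "answer" is simply the token its internal model ranks highest.
best = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(best)  # → Paris
```

If training had left the model with bad logits (say, the highest score on "London"), the exact same mechanism would confidently output the wrong token; nothing about the procedure changes between a correct answer and a hallucination.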



Heh, here's another pair I see:

"LLMs don't create anything new" and "LLMs hallucinate all the time"

I want to ask those people which of the two sentences is correct, since they appear to contradict each other.



