
To fly means "to soar through air; move through the air with wings" (etymonline)

That is pretty much an accurate description of what planes and birds do.

To plan means "to reason with intent".

That is very much not what LLMs do, and the paper does not provide evidence to the contrary. Yet it uses the term to give credence to its rather speculative interpretation of observed correlation as causation.

Interestingly enough, there is no definition of the term, which at the very least would help readers understand what the authors actually mean.

I would be more inclined to take a positive stance towards the paper if it used more appropriate terms, such as calling observed correlations just that. Granted, that would possibly make for a much less fancy title.


Very much in support of this. The use of anthropomorphic or even biological terms is entirely misguided. All it does is drive a narrative that belittles natural intelligence.


Unfortunately, that's not a review but a hype-driving oversimplification of what the paper says.


"Hallucination" is just to term we use to say "this result is not what it should be". The model always uses the very same process, it does not do one thing for "hallucinations" and something else for "correct" results.

In a nutshell, it is always predicting the next token from a probability distribution conditioned on the preceding tokens. That's it.

All other interpretations are speculative.
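
To make this concrete, here is a minimal sketch (plain numpy, with made-up logits over a toy vocabulary; not any particular model's code) of the single decoding step a model repeats for every token. Nothing in it branches on whether the output will later be judged correct or a hallucination:

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # The same softmax-and-sample step runs for every token,
        # whether the resulting text turns out "correct" or not.
        scaled = logits / temperature
        scaled -= scaled.max()            # for numerical stability
        probs = np.exp(scaled)
        probs /= probs.sum()
        return int(np.random.choice(len(probs), p=probs))

    # Hypothetical logits over a 5-token vocabulary
    logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])
    print(sample_next_token(logits))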


> This doesn't seem to happen very often in classical programming, does it?

Try concurrent programming. It happens all the time.
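
For a concrete illustration, here is a minimal Python sketch (thread and iteration counts are arbitrary): several threads increment a shared counter without a lock, and lost updates make the final value vary from run to run, i.e. "a result that is not what it should be" out of the very same code:

    import threading

    counter = 0  # shared state, deliberately unprotected

    def worker(iterations):
        global counter
        for _ in range(iterations):
            tmp = counter      # read
            counter = tmp + 1  # write: updates made by other threads
                               # between the read and write are lost

    threads = [threading.Thread(target=worker, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Should print 400000; typically prints a smaller, run-dependent number.
    print(counter)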


We know how they work. It's just that they work better than expected. That of course doesn't mean we don't know how they work; it just means there are second-order effects that are non-obvious. Intelligence is not implied.


There is no evidence to this end. There are lots of claims, but claims are not evidence.


Attention is all you need.


Ok there is correlation. But is there causation?


Let's note that the label you assign to this feature is entirely speculative, i.e. it is your interpretation, not something the model actually "knows".

