Hacker News

> They remix and rewrite what they know. There's no invention, just recall...

If they only recalled they wouldn’t “hallucinate”. What’s a lie if not an invention? So clearly they can come up with data that they weren’t trained on, for better or worse.



Because internally, there is no difference between a correctly "recalled" token and an incorrectly recalled (hallucinated) one.
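A toy sketch of why that's true mechanically: a language model produces logits over a vocabulary and samples from the resulting distribution, and nothing in that code path marks one candidate token as "recall" and another as "hallucination". The vocabulary and logit values below are made up for illustration; real models work over tens of thousands of tokens.

```python
import math
import random

# Hypothetical logits for the continuation of "The capital of France is ..."
vocab = ["Paris", "Lyon", "Berlin"]
logits = [4.0, 1.5, 0.5]

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng):
    # Standard categorical sampling from the softmax distribution.
    probs = softmax(logits)
    r = rng.random()
    cum = 0.0
    for token, p in zip(vocab, probs):
        cum += p
        if r < cum:
            return token
    return vocab[-1]

rng = random.Random(0)
draws = [sample_next_token(vocab, logits, rng) for _ in range(1000)]

# "Paris" dominates, but "Lyon" and "Berlin" are produced by the exact
# same code path: the sampler has no notion of factual vs. invented.
print({t: draws.count(t) for t in vocab})
```

The point of the sketch: a wrong answer is not a separate failure mode inside the model, just a lower-probability sample from the same distribution.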


Depends on the training? If there was, e.g., RLHF, then those connections are stronger and more likely to be followed; that's a difference (but not a category difference).
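That "stronger but not categorically different" point can be illustrated with a toy logit bump. The update rule and numbers below are made up; real RLHF optimizes against a learned reward model (e.g. with PPO) rather than nudging a single logit. The sketch only shows that preference tuning shifts probability mass on a continuous scale, without creating a new kind of token.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Two hypothetical continuations with nearly equal scores.
vocab = ["accurate", "made-up"]
logits = [1.0, 0.8]

before = softmax(logits)

reward = 1.0   # hypothetical reward for the human-preferred token
lr = 0.5       # hypothetical step size
logits[0] += lr * reward  # push mass toward the preferred continuation

after = softmax(logits)

# The preferred token becomes more likely, but both tokens still live on
# the same continuous scale: a quantitative shift, not a category change.
print(before, after)
```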


Yes, but I thought we were talking about a category difference.

Proper RLHF surely shifts "predicted the next token as best it could" toward feeling more like "actually recalled".



