
This is something I have noticed with a lot of models. I'm not sure what the technical term for it is, but when there is a repeated sequence of human input followed by model generation (like a chatbot), it seems unable to stay focused. When you prod it to regain focus and come back to the topic being discussed, it starts making things up.

If you use GPT-3 for a large amount of content generation, the focus issue doesn't seem to be as prevalent, but there is zero guarantee of truth.



It's unable to focus because you can only feed back so much of the ongoing transcript before it exceeds the prompt's input size limit (see the sketch below).

So having a conversation with even some of the best GPT models is going to be like having a conversation with the protagonist from Memento.
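For anyone unfamiliar with why the model "forgets": here's a minimal, hypothetical sketch of how a chat frontend might trim history before each request. The names, the whitespace token count, and the 2048-token budget are illustrative assumptions, not any particular vendor's API.

    # Sketch (assumptions, not a real API) of why long chats "forget":
    # the frontend can only resend as much history as fits the model's
    # context window, so older turns are dropped before each request.

    MAX_PROMPT_TOKENS = 2048  # assumed context budget for illustration


    def rough_token_count(text: str) -> int:
        # Crude stand-in for a real tokenizer; real counts differ.
        return len(text.split())


    def build_prompt(history: list[str], new_user_msg: str) -> str:
        """Keep the most recent turns that fit the budget; drop the rest."""
        turns = history + [f"User: {new_user_msg}", "Assistant:"]
        kept: list[str] = []
        used = 0
        for turn in reversed(turns):  # walk from newest to oldest
            cost = rough_token_count(turn)
            if used + cost > MAX_PROMPT_TOKENS:
                break  # older turns silently fall off the prompt
            kept.append(turn)
            used += cost
        return "\n".join(reversed(kept))

Anything that doesn't fit the budget is simply never sent to the model on the next turn, which is why it behaves like it has no long-term memory of the conversation.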


Interestingly, the source code for the frontend has Chinese comments all over it.


Did Blake Lemoine try this with LaMDA?


It's difficult not to run into this issue; he even admitted the chat logs were not verbatim.



