This is something I have noticed with a lot of models. I'm not sure what the technical term for it is, but when there is a repeated sequence of human input followed by model generation (like a chatbot), the model seems unable to stay focused. When you prod it to regain focus and return to the topic being discussed, it starts making things up.
If you use GPT-3 for a large amount of content generation, the focus issue doesn't seem as prevalent, but there is zero guarantee of truth.