AI can't create good (meaning truthful) content: it's literally impossible for LLMs to be hallucination-free, because that's just not how they work.
The problem will only get worse as hallucinated output gets fed back in as training data, because even the AI companies can't reliably tell AI-generated content from human-written content.