I wonder if most people realize that your observation points to a fundamental problem with LLMs: they have no mechanism for evaluating factuality. Keep asking ChatGPT "Are you sure?" and it will eventually cave and change its answer.
The inability to reliably answer basic factual questions should be a dealbreaker.