As others have said, ChatGPT is great for writing fluff content where there's no right or wrong answer, but it's still weak when a correct answer is required, as in legal analysis. It can write a great 10-page summary of the history of the use of strawberries. But ask it how many r's are in the word strawberry and it's not very trustworthy.
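For contrast, here's a five-second sketch in plain Python (nothing model-specific, just the stdlib) that gets it right every time:

    word = "strawberry"
    # Count occurrences of the letter directly; no tokens involved.
    print(word.count("r"))  # -> 3

The usual explanation for why LLMs stumble here is that they operate on tokens rather than individual characters, so the letters inside "strawberry" aren't directly visible to the model.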
I wonder if most people realize that your observation points to a fundamental problem with LLMs: they simply have no means of evaluating factuality. Keep asking ChatGPT "Are you sure?" and eventually it will fold and change its answer, even when it was right the first time.
An inability to get basic facts right should be a dealbreaker.