No, I don't think you can ever trust AI to answer correctly. I've seen it confidently hallucinate, so I always check what it says against other, more static sources. It's the same as reading an author whose books include a lot of mistakes: I might still find them interesting and useful, but I'll want to double-check the key facts before I quote them to others.
Saying this is no different from saying you can't ever trust computers because they were (very) unreliable in the '50s and early '60s. We've only had "good" generative AI for around five years; there is still much to improve before it reaches the reliability of other information sources like Wikipedia and Britannica.
No, you should not trust AI to answer truthfully about anything. It often will, but LLMs are well known to hallucinate. Verify all facts, from any source really, but especially from AI.
Are there any other specific things we shouldn't expect from AI, or shouldn't ask AI to do?