Sort of. But if I read a five-page article by a human and everything on the first couple of pages checks out as correct, I expect the rest of the article to be at least reasonable. If it’s written by AI, it’s entirely possible that it’ll veer into something that doesn’t even make sense. Then I’ll realize that I can’t trust even the part that seemed correct.