This particular person seems to be using LLMs for code review, not generation. I agree the problem is compounded if you use an LLM (especially the same model) on both sides. Still, it seems reasonable and useful as an adjunct to other forms of testing, though not necessarily a replacement for them. How far it can go as a replacement depends on the level of the technology; right now it can probably replace some traditional testing methods, but it's hard to know which ones ex-ante.
edit: of course, maybe that means we need a meta-suite, that uses a different LLM to tell you which tests you should write yourself and which tests you can safely leave to LLM review.
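To make the meta-suite idea concrete, here's a rough sketch of what the routing could look like. Everything here is hypothetical: `review_model_score` stands in for a call to a second, independent LLM, and the stub heuristic inside it exists only so the routing logic runs at all.

```python
# Hypothetical "meta-suite" routing: a second model decides which tests
# need human-written assertions and which can be left to LLM review.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    touches_security: bool
    generated_by_llm: bool

def review_model_score(test: TestCase) -> float:
    """Stand-in for asking a different LLM how risky it is to automate
    this test. Returns a risk score in [0, 1]; higher = write by hand."""
    score = 0.2
    if test.touches_security:
        score += 0.5
    if test.generated_by_llm:
        score += 0.3  # same-model review compounds blind spots
    return min(score, 1.0)

def route(tests, threshold=0.5):
    """Split tests into manual-review vs. safe-to-automate buckets."""
    manual, automated = [], []
    for t in tests:
        (manual if review_model_score(t) >= threshold else automated).append(t)
    return manual, automated

tests = [
    TestCase("parse_config", touches_security=False, generated_by_llm=True),
    TestCase("auth_token_expiry", touches_security=True, generated_by_llm=True),
    TestCase("string_utils", touches_security=False, generated_by_llm=False),
]
manual, automated = route(tests)
```

The point isn't the scoring heuristic (a real version would ask the second model directly); it's that the manual/automated split becomes an explicit, inspectable artifact rather than an implicit assumption.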
Indeed, the idea of a meta LLM, or at least a clear distinction between manual tests and automated-but-questionable ones, makes sense. What bothers me is that this does not seem to be the approach most people take: code produced by an LLM is treated the same as code produced by human authors.