I guess answering "you obviously didn't write it, please redo" is not an option, because then you're the dinosaur hindering the company's march toward the AI future?
You might make this easier by saying you ran their code through your own AI system, and it returned "you obviously didn't write it, please redo".
Honestly, I don't think it matters who wrote it; ultimately it's about the code and the product, not the individual author.
That said, a lazy contribution - whether substandard hand-written code or carelessly LLM-generated code - just wastes your time if your feedback gets fed straight back into the LLM. Setting boundaries there is perfectly acceptable, but that isn't unique to LLMs.