Part of the solution is pushing back when you spot tons of obvious lazy LLM errors instead of fixing them yourself. Otherwise there's not much incentive for them to improve their effort.
Yes, I've tried to set an internal standard for AI usage: at a minimum, the PR people have to tell us when they use AI. It completely changes how we approach editing a text depending on whether it's AI-written or human-written (humans don't hallucinate citations, for a start).
Of course, this is impossible to enforce, and I suspect the PR people would rather hide their AI usage. (As I wrote above: why pay high salaries to people who automate themselves away?)