So in reality it would probably end up as a faster way to reject unhelpful or harmful uses of AI. If you manage to auto-generate correct package descriptions (perhaps with human review), nobody has a reason to complain, even if you overuse the word "delve" a bit.
At the same time, if you produce text and code that read too obviously like a bot wrote them, they may have just cause to dismiss your submission and perhaps even ban you, if we're being that teleological about it.
I don't personally agree this particular line in the sand will help in all cases -- determining whether something is AI-generated is a difficult standard to apply, and it will likely increase the burden on the humans in the loop. But as policies go, it makes sense to draw a line that allows outright rejection based on source rather than content, especially in the context of a package manager and Linux distribution. The burden on those humans in the loop will be even greater without a rule granting blanket dismissal on this basis, particularly if they're right to expect an increase in AI-produced packaging of unknown binaries.