Seems impossible to enforce, but I applaud the spirit of it.
Pretty soon anyone looking to add "open source contributor" to their GH profile can take a Gentoo issue and ask an AI to cook up a solution, put that on a PR, and send it in.
This will be a nightmare for maintainers. I'm not sure there is a solution: AI usage will spread regardless of how good or accurate it is, and there's no way to differentiate plausible bullshit from genuine contributions without reading each one carefully. Contributor reputation is probably the best proxy for genuine work, but that's a catch-22 for newcomers, so it can't be the only mechanism.
As a maintainer of a Very Large open source project, I have not found this to be the case. AI/LLM tools have not generated noticeably more noise. The occasional low-effort PR existed before ChatGPT and Copilot, and it continues to exist after them. "Banning AI" does not absolve you of the responsibility to review PRs, nor do I believe it actually makes the job easier.
I believe I've noticed only one "LLM spam" comment on an issue, needlessly comparing different JavaScript package managers.
To be clear, I don't think this is true right now. But if the technology improves even a bit further, it will become cheap enough to spam with, and it will be abused.