I'd worried that mentorship was exactly the point where human-to-human interaction matters most.
I suspect what you're getting at is that "mentorship" is really code for using AI to step in when people are making the wrong kind of changes to a Wikipedia page (i.e., introducing bias, promoting products, edit wars, etc.).
I also wonder if it could help with the edit-review process, but I can imagine it risks becoming an accountability sink[^1] if reviewers simply defer their judgement to whatever the bot says is OK to purge. It might have a chilling effect on edits if everyone, including procedure hawks, comes to rely on it that way. I'm not enough of a contributor there to know.
I'm curious to see how this plays out.