Seeing how poorly ML works for moderation (too many false positives), I don't think it belongs anywhere near it.
The problem is that you could offer users a path to request human review of any moderation action taken by ML, but bad actors who knowingly break the rules will just request human review every time, and at that point the ML is worthless.