Content moderation extends well beyond spam. Many use cases call for detecting other kinds of online harm, such as bullying, hate speech, and harassment. Many of these tasks can be improved by training classifiers on top of language models, which better capture the context and nuances of language.
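To make the idea concrete, here is a minimal sketch of what a multi-category moderation interface might look like. The harm labels, function names, and keyword stub are all illustrative assumptions, not from the source; the stub stands in for where a fine-tuned language-model classifier would plug in.

```python
from dataclasses import dataclass

# Illustrative harm categories (assumed for this sketch, not from the source).
HARM_LABELS = ["spam", "bullying", "hate_speech", "harassment"]

@dataclass
class ModerationResult:
    text: str
    scores: dict  # label -> confidence in [0, 1]

def moderate(text, classify):
    """Run a pluggable classifier over one piece of text.

    `classify` is a stand-in for a fine-tuned language-model classifier;
    this only fixes the interface: text in, per-label scores out.
    """
    return ModerationResult(text=text, scores=classify(text))

def keyword_stub(text):
    """Toy placeholder classifier using keyword matching.

    A real system would replace this with a classifier built on a
    pretrained language model, so that surrounding context (e.g. quoting
    an insult vs. directing it at someone) changes the score.
    """
    lowered = text.lower()
    return {
        "spam": 1.0 if "buy now" in lowered else 0.0,
        "bullying": 1.0 if "idiot" in lowered else 0.0,
        "hate_speech": 0.0,
        "harassment": 0.0,
    }

result = moderate("Buy now!!! Limited offer", keyword_stub)
flagged = [label for label, score in result.scores.items() if score >= 0.5]
```

The point of the sketch is the shape of the problem: one piece of text, several harm categories scored independently, and a classifier component that can be swapped from simple heuristics to a context-aware language model without changing the surrounding pipeline.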