Hacker News

Content moderation extends to plenty of sub-problems beyond spam. Many use cases need detection of different types of online harm (bullying, hate speech, etc.). A lot of these cases can be improved by training classifiers on top of language models that better capture the context and complexities of language.
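A common pattern here is to freeze a pretrained language model and train a lightweight classifier head on its text embeddings. A minimal sketch of that second stage, assuming embeddings are already available — the `embed` stub below is a hypothetical stand-in for a real encoder (e.g. a sentence-transformer), and the toy texts/labels are made up:

```python
import numpy as np

def embed(texts):
    # Hypothetical stand-in for a real language-model encoder; it fakes
    # 8-dim vectors deterministically from the text so the sketch runs
    # without any model download. A real system would call an encoder here.
    vecs = []
    for t in texts:
        rng = np.random.default_rng(sum(ord(c) for c in t))
        vecs.append(rng.normal(size=8))
    return np.stack(vecs)

def train_head(X, y, lr=0.5, steps=2000):
    # Logistic-regression "head" trained with plain gradient descent
    # on top of the frozen embeddings.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        grad = p - y                            # dLoss/dlogits for BCE
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# Tiny illustrative dataset: 1 = harmful, 0 = benign (invented examples).
texts = ["you are worthless", "go away loser",
         "nice work today", "see you at lunch"]
labels = np.array([1, 1, 0, 0])
X = embed(texts)
w, b = train_head(X, labels)
preds = (X @ w + b > 0).astype(int)
```

The point of the split is that the expensive, context-aware part (the language model) is reused across harm types, while each sub-problem only needs a small labelled set to train its own head.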


