
4 months ago I tried every dedicated NSFW-image-classifier model I could find on HuggingFace or GitHub. They have a high false positive rate on certain kinds of benign content, like close-up photographs of hands with painted fingernails, and a high false negative rate on artistic nude photographs. I even tried combining multiple models with gradient boosting, but the accuracy barely improved; maybe everyone is training on very similar data sets. At this point I should probably train my own model, but I was hoping to find something capable off the shelf, since content moderation is such a common task. A sketch of the stacking setup I tried is below.
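For reference, here's roughly what the gradient-boosting combination looked like, in case anyone wants to try it: per-image scores from the off-the-shelf classifiers become features for a gradient-boosted meta-classifier. The score and label files are hypothetical placeholders for whatever hand-labeled set you have.

    # Minimal stacking sketch: upstream model scores -> gradient-boosted meta-classifier.
    # "model_scores.npy" and "labels.npy" are hypothetical placeholder files.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # X: one row per image, one column per upstream model's NSFW probability
    # y: 1 = NSFW, 0 = benign, from a hand-labeled validation set
    X = np.load("model_scores.npy")   # shape (n_images, n_models)
    y = np.load("labels.npy")         # shape (n_images,)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    booster = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    booster.fit(X_train, y_train)
    print("held-out accuracy:", booster.score(X_test, y_test))

Even with the meta-classifier, accuracy plateaued; if the upstream models all fail on the same inputs, there's no signal left for the booster to exploit.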


You can just fine-tune an open model instead of starting from scratch... that's the point of them.
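Something like this with the HuggingFace Trainer, assuming your labeled images are in imagefolder layout (e.g. data/train/safe, data/train/nsfw). The ViT checkpoint is just an example base model, and the paths are hypothetical:

    # Minimal fine-tuning sketch: swap the ImageNet head for a 2-class NSFW head.
    from datasets import load_dataset
    from transformers import (AutoImageProcessor,
                              AutoModelForImageClassification,
                              TrainingArguments, Trainer)

    ds = load_dataset("imagefolder", data_dir="data/")  # hypothetical path
    checkpoint = "google/vit-base-patch16-224"          # example base model
    processor = AutoImageProcessor.from_pretrained(checkpoint)
    model = AutoModelForImageClassification.from_pretrained(
        checkpoint,
        num_labels=2,
        ignore_mismatched_sizes=True)  # replace the 1000-class ImageNet head

    def preprocess(batch):
        # Convert PIL images to the pixel tensors the model expects
        batch["pixel_values"] = processor(
            batch["image"], return_tensors="pt")["pixel_values"]
        return batch

    ds = ds.map(preprocess, batched=True, remove_columns=["image"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="nsfw-vit",
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=ds["train"])
    trainer.train()

A few thousand labeled examples of your own failure cases (painted fingernails, artistic nudes) would likely move the needle more than any amount of ensembling pretrained checkpoints.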



