
This highlights a need in the market for tools to help moderate a community. Imagine a tool that automatically detects hate speech and either auto-deletes it or brings it to a moderator’s attention. Certain communities are being hijacked by extremist, racist, and simply malicious actors. The current method of reading chat and banning users doesn’t scale when sudden growth occurs.

If effective moderation can occur at smaller levels, like Discord channels or subreddits, then those communities won’t need to be removed by the larger platform. This would also be helpful for startup social media platforms that have yet to bring in enough revenue to afford a Facebook-sized moderation team.

Technically speaking it can do things like:

* Flag posts containing blacklisted words, including non-obvious spellings of those words (e.g. non-standard Unicode characters in place of letters)

* Cross reference IP addresses or user names with banned users in other communities

* Notify moderators of trending slogans, phrases, or hashtags that have non-obvious extremist roots.

* Identify images containing extremist/hateful content

* Flag content that contains any political discussion for communities that want to be completely apolitical.

* Flag pornography
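The first bullet above is a concrete, implementable piece. Here is a minimal sketch of blacklist matching with character folding; the confusables table and function names are hypothetical, and a real system would use a much larger substitution map:

```python
import unicodedata

# Hypothetical mapping of common look-alike substitutions to ASCII,
# so e.g. Cyrillic "а" or "@" in place of "a" still matches.
CONFUSABLES = str.maketrans({
    "а": "a",  # Cyrillic a
    "е": "e",  # Cyrillic e
    "о": "o",  # Cyrillic o
    "0": "o",
    "1": "l",
    "3": "e",
    "@": "a",
    "$": "s",
})

def fold(text: str) -> str:
    """Normalize Unicode (NFKC), lowercase, and collapse substitutions."""
    text = unicodedata.normalize("NFKC", text).casefold()
    return text.translate(CONFUSABLES)

def flag_words(post: str, blacklist: set) -> set:
    """Return blacklisted words found in the post after folding."""
    return {w for w in fold(post).split() if w in blacklist}
```

NFKC normalization catches things like fullwidth characters for free; the translation table handles the leetspeak-style swaps it can’t.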

EDIT: To be clear, the target audience for this would be community moderators/admins or startup social networks that haven’t built their own moderation infrastructure.



This sounds dystopian to me. I already can't stand Google Docs grammar checker trying to re-write my sentences to match some AI's idea of a correct sentence. Sure, when it actually detects a mistake I'm happy but 20% of the time it's just wrong and feels like it's trying to take my personality out of my writing.

Your suggestion sounds like a step toward the Black Mirror "White Christmas" episode, where the main character gets banned from all social interaction (not just online) until death.


I think the current situation, where a herd of deplatformed white supremacists and conspiracy theorist cultists can hijack your community, is dystopian. Leaders in a community should be given the power to determine who is and isn’t a part of it.


Some of this exists, and both Quora and Facebook (among others) use it extensively. Both hate speech and porn are good targets for machine learning. It needs supervision, but it can take a lot of load off human moderators.

Open source implementations exist, e.g.:

https://github.com/t-davidson/hate-speech-and-offensive-lang...

I suspect more message boards will want to start applying these sooner rather than later. Most have already figured out that they need anti-spam tools, rather than discovering it the hard way when they launch and fill up with bots. The technology is similar.

You mention being able to share that information across boards, and I don't know of any widespread implementation of that. You can, at least, let somebody else handle your authentication, which slightly slows their ability to create new accounts when you blacklist one. I'd like to see those sites distinguish "aged" accounts, so that it at least takes some effort or cost to use a new account.
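Distinguishing "aged" accounts is the simplest of these ideas to sketch. Below is a minimal illustration; the 30-day threshold and function names are assumptions, not any real platform's policy:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed threshold: accounts younger than this carry less weight,
# so a banned user can't trivially return on a fresh account.
AGING_PERIOD = timedelta(days=30)

def is_aged(created_at: datetime, now: Optional[datetime] = None) -> bool:
    """An account counts as aged once it has existed for AGING_PERIOD."""
    now = now or datetime.now(timezone.utc)
    return now - created_at >= AGING_PERIOD
```

A site could then rate-limit or shadow-review posts from unaged accounts rather than blocking them outright, which raises the cost of ban evasion without punishing genuine newcomers too harshly.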


East German commies would be proud to have you.

Nothing says dictatorship better than an ill-defined concept such as hate speech combined with the power of the ban hammer.

Please note that if you downvote this comment you will be producing hate speech against me and you WilL bE rePoRteD tO tHE LoCOaL AUtHoRItHIEs!!1


I downvoted you if you would like to report me.



