
When putting anything into a public environment, always assume an actively hostile environment. No matter how many well-meaning users you may have: if it's more than a handful, there will always be enough jerks who will try to ruin the show for everyone.

To my mind, premoderation is the way. Any new user's submissions go to the premoderation queue for review, not otherwise visible. Noise and spam can be rejected automatically. More underhanded stuff gets a manual review. All rejections are silent, except for the rare occasion of a legitimate but naive user making an honest mistake.

What passes gets published. Users who have passed premoderation without issues, say, 10 times skip the human review step (provided they still pass the automatic filters), so they can talk without any perceptible delay. The most trusted of them even get the privilege of doing the human review step themselves %)
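The tiered flow described above can be sketched roughly as follows. All the names (`User`, `submit`, the `TRUSTED_AFTER` threshold, the filter phrases) are illustrative assumptions, not from any real moderation system:

```python
# Hypothetical sketch of tiered premoderation: automatic filters first,
# then a human review queue for new users, skipped once trust is earned.
from dataclasses import dataclass

TRUSTED_AFTER = 10  # approvals needed before skipping human review (assumed)

@dataclass
class User:
    name: str
    approved_posts: int = 0

def passes_automatic_filters(text: str) -> bool:
    # Stand-in for real spam/noise heuristics.
    banned = ("free crypto", "buy followers")
    return not any(phrase in text.lower() for phrase in banned)

def submit(user: User, text: str, review_queue: list) -> str:
    if not passes_automatic_filters(text):
        return "rejected"       # silent rejection, author not notified
    if user.approved_posts >= TRUSTED_AFTER:
        user.approved_posts += 1
        return "published"      # trusted users post with no delay
    review_queue.append((user, text))
    return "queued"             # new users wait for manual review

def approve(user: User, text: str) -> str:
    # Called by a human reviewer; each approval builds trust.
    user.approved_posts += 1
    return "published"
```

The key design point is that trust is earned per-account and rejection is silent, so spammers get no feedback loop to probe the filters with.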



One thing I wish the tech companies would do is to use LLMs for junk moderation. At least to flag potential junk.

Meta already uses LLMs to summarize comments and could do this, yet they choose to allow obvious crypto scams, T-shirt scams, and "hey add me" comments.

A simple LLM prompt of “is this post possibly a scam”, especially for new accounts, would do wonders. GitHub could likely do it too.
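A minimal sketch of that check, with the LLM call injected as a callable so any chat API wrapper can be plugged in. The prompt wording, the 30-day "new account" cutoff, and the `ask_llm` parameter are all illustrative assumptions:

```python
# Hypothetical "is this post possibly a scam?" flagger.
# `ask_llm` is any callable taking a prompt string and returning the
# model's text reply; wiring it to a real API is left to the caller.

def flag_if_scam(post: str, account_age_days: int, ask_llm) -> bool:
    # Concentrate LLM spend on new accounts, where scams cluster (assumed cutoff).
    if account_age_days > 30:
        return False
    prompt = (
        "Is this post possibly a scam? Answer YES or NO.\n\n"
        f"Post: {post}"
    )
    answer = ask_llm(prompt)
    return answer.strip().upper().startswith("YES")
```

Flagged posts would then feed the premoderation queue rather than being auto-deleted, keeping a human in the loop for false positives.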


Just a matter of time before we get newsletters selling guides to earning $50,000 weekly by fine-tuning your L-MO business

L-MO = Language Model Optimized



