
The idea that 10% of tweets are reported is a huge overestimate. I'd say at most 1% of tweets are reported, and it's probably more like 0.1%.

Twitter's actual numbers (from https://transparency.twitter.com/en/reports/rules-enforcemen...) show that 11.6m reports were generated in the period July to December 2021, which is roughly 65,000 reports per day. ML could easily reduce this number further, but even without it, 100 employees doing moderation works out to 650 reports per employee per day. That's getting towards doable.
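A quick sanity check of that arithmetic in Python (184 days is the length of the July to December period; the 100-moderator headcount is the assumption from this thread, not a real staffing figure):

    reports = 11_600_000   # reports in July-December 2021, per the transparency report
    days = 184             # length of that six-month period
    moderators = 100       # assumed headcount from the discussion above

    per_day = reports / days              # ~63,000/day, in line with the rough 65,000 above
    per_moderator = per_day / moderators  # ~630 reports per moderator per day
    print(f"{per_day:,.0f} reports/day, {per_moderator:,.0f} per moderator per day")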



Are those employees able to speak all the world's languages?


> ML could easily reduce this number further

Seeing how poorly ML works for moderation (too many false positives), I don't think it belongs anywhere near it.

The problem is that you could offer users a path to request human review of any moderation action taken by ML, but bad actors who knowingly break the rules will simply request human review every time, and at that point the ML is worthless.
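To make that concrete, here's a minimal sketch of the kind of threshold-based triage being discussed; the class, scores, and cut-off values are all hypothetical, not anything Twitter has documented:

    from dataclasses import dataclass, field

    @dataclass
    class ModerationTriage:
        """Route tweets by an ML abuse score; uncertain cases go to humans."""
        auto_remove: float = 0.95              # hypothetical confidence cut-offs
        auto_allow: float = 0.20
        human_queue: list = field(default_factory=list)

        def triage(self, tweet_id: str, score: float) -> str:
            if score >= self.auto_remove:
                return "removed"               # but still appealable (below)
            if score <= self.auto_allow:
                return "allowed"
            self.human_queue.append(tweet_id)  # model unsure: human decides
            return "queued"

        def appeal(self, tweet_id: str) -> None:
            # The loophole: any auto-removed tweet lands back in the
            # human queue just by requesting review.
            self.human_queue.append(tweet_id)

    q = ModerationTriage()
    q.triage("t1", 0.99)   # auto-removed with no human involved...
    q.appeal("t1")         # ...until the bad actor appeals

The ML only saves human labour on decisions nobody appeals; a determined rule-breaker can push every removal back into the human queue.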


The 10% figure would also include tweets reviewed automatically by the misinformation, covid, and other language filters they have in place.

Not every tweet reviewed for moderation gets there via a user report.



