We already have AI moderation at places like Facebook, and they still need a small army of human moderators. Zuckerberg has been trying to automate moderation for years.
Re: Zuckerberg, nothing like GPT-4 existed five or ten years ago. It won't take much further improvement for, e.g., OpenAI's models to handle moderation effectively by learning from instructions tailored to a given community's moderation needs.
There's no scenario where AI moderators don't become superior to human moderators in a broad sense (humans will stay better only in select, very narrow cases). Moderation is one of the easier areas of human activity for AI to wipe out.
This isn't then. The game has changed, permanently and dramatically. Anybody evaluating what I'm saying based on GPT-3.5 or GPT-4 is doing it very wrong; that's a comical failure to look ahead at all. Look at how far OpenAI has come in just a few years. Progress in that realm is not going to stop anytime soon.
Nvidia is unleashing extraordinary GPU systems at the datacenter level that will quite clearly enable further leaps in LLMs, and those models will handle moderation tasks almost trivially.
Pretty arrogant to call it a comical failure when OpenAI is openly stating they're not training GPT-5. People who think we can do much better than GPT-4 without insane cost scaling and a nuts amount of data (which is going to get harder to collect once regulations kick in) probably don't know what they're talking about. We're deep into diminishing returns here.
I do think we can do much better than GPT-4. Size isn't everything: small models can outperform large ones when trained and finetuned in the right way. And transformers are hardly the be-all and end-all of language AI either; there's plenty of reason to believe they're both inefficient and architecturally unsuited to some of the tasks we expect of them. The field is brand new, and now that ChatGPT has turned the world's figurative Eye of Sauron toward developing human-level AI, we're going to see a lot of progress very quickly.
Yeah, but if the tech keeps improving at the rate it has been, then more and more of it is going to get automated. And tbh, that's a good thing. Moderating is a horrible job that frequently causes mental health issues for the people who have to do it.