Could you elaborate on what you consider "properly dealing with it" in this context? They are definitely not ignoring it; they take extra steps to keep the model from producing it.
Since they are trying to stop it from producing that material, I assumed that criticizing the filtering means someone wants the AI to produce those things.
The context is probably what's throwing you off then. I took bilsbie's comment to be expanding the context here: imagining how this general strategy of censoring what's actually out there plays out in the long run, and in the wider context of how humanity as a whole does or doesn't deal with what human minds are actually capable of producing.
Also note that this wouldn't just be about a bot producing more bad stuff. ChatGPT also, or even mostly, answers questions and provides information. If bots like this do not even know about the filtered-out content, they cannot, for example, give an accurate or original answer to a question like: "Is Nazism on the rise?"
The best they can do is regurgitate news posts, pundit commentary, and existing research articles. That might be good enough, but if a bot were able to see all the content out there, it might even give a statistical summary of everything on the web and comment on the rate at which pro-Nazi web articles are published compared to articles of all other types. That would be a useful way to shed some daylight on the issue.
If most of our effort, most of the time, is spent merely filtering out the trash, then we're still just ignoring some pretty big issues. The strategy as applied here isn't exactly wrong for what OpenAI is trying to achieve, but it ends up being another example of people being more willing to spend a lot of resources pretending humans are better than they are than to spend a lot of resources fixing the underlying issues that make some people so screwed up to begin with.
This new ML model they are building will be used to filter out child porn, among other things, from the output of ChatGPT, so users of that bot don't see it. It seems their ideal case is to reliably filter it from the input as well, so even the bot doesn't know about it. That's fine for what they're doing, so they'll probably stop there.
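To make that output/input distinction concrete, here is a minimal sketch in Python. It is purely illustrative, not OpenAI's actual pipeline, and classify_harmful is a hypothetical stand-in for a trained moderation model:

    # Hypothetical stand-in for a trained moderation classifier.
    BLOCKLIST = {"example_banned_term"}  # a real system would use an ML model, not keywords

    def classify_harmful(text: str) -> bool:
        return any(term in text.lower() for term in BLOCKLIST)

    def filter_output(model_response: str) -> str:
        # Output-side filtering: the model may "know" the material,
        # but the user never sees it.
        if classify_harmful(model_response):
            return "[content removed]"
        return model_response

    def filter_training_data(documents: list[str]) -> list[str]:
        # Input-side filtering: flagged documents never enter the training
        # set, so the resulting model has no knowledge of them at all.
        return [doc for doc in documents if not classify_harmful(doc)]

The second function is the case relevant here: once the material is stripped from the training data, the model can neither produce it nor say anything meaningful about its prevalence.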
However, once you have something like a reliable detector for human misery and the warped human minds that produce it, you're actually well on your way to developing a tool chain that could help authorities and mental health professionals identify and track down those warped minds as soon as, or possibly before, they cause harm.
I'm not confident I know the whole field well enough to say no one is doing something like that, but it does seem like the preferred option for almost all AI teams is to spend $200k filtering out the problem rather than taking the time to build something that reduces or eliminates the root cause. And that stance is reasonable enough at the team/company level, because you'd have to do the former anyway even if you also did the latter.
So of course it's not OpenAI's job to do that, and this isn't a moralistic judgement of the teams building AI in general either. Humanity just isn't interested in solving those problems for real in the same way we are interested in a bot that can write code or tell us interesting stories. We want to know the police are on it in some capacity, but there's not likely to be enough investment to actually solve the problems any time soon.
In the long run, being increasingly good at filtering this stuff out tends toward a future where most people can live their whole lives without ever becoming aware that these awful problems exist, possibly in their own cities. And if most people are never aware of a problem, it's not likely to ever get enough attention and funding to be solved.
So that is the sense in which I can find agreement with bilsbie's notion that merely filtering out the bad stuff humans produce can be more toxic than letting people see what humans are really like. Though again, it's hard to blame any one team for just wanting to filter that stuff out.
---
And there is actually at least one other, unrelated way to look at bilsbie's comment, so you might ask them what they meant as well. The other reading I can think of is the question of who decides what's acceptable content, and then expanding the context to a hypothetical world where tools like ChatGPT are ubiquitous.
There are some obvious cases we'll all pretty universally call bad, so we don't mind someone training the bots to filter those out. However, there are other examples, like a lot of horror books/films or even mystery/detective novels, that push boundaries precisely to get people to think; if those get caught in the filters too, then we're doing humanity a disservice by effectively banning, or digitally burning, books/articles/web-sites. And let's not forget: if people get good at building such filter systems for all the bots, you can bet governments with an eye for controlling thought will be interested in installing some filters of their own.