I don't have a strong view on this law – I haven't read enough into it. So I'm interested to know why you believe what you've just written. If a country is trying to, for example, make it harder for CSAM to be distributed, why shouldn't the person operating the site where it's being hosted have some responsibility to make sure it can't be hosted there?
For one thing, because that person is not obliged to follow due process and will likely ban everything that might even vaguely require them to involve a lawyer. See for example YouTube’s copyright strikes, which are much harsher on the uploader than any existing copyright law.
Your argument is that it's better to have the illegal stuff (say, CSAM) online than for a site owner to, for practical reasons, ban a lot of legal stuff too? Why?
Some sorts of goods should be prioritized over some sorts of bads. There would be no terrorism if we locked every human in a box and kept them there, yet you do not support that position. Why not? I jest, but I think public discourse is an unalloyed good, and I would rather we not compromise informal, small-scale discourse for the sake of anti-terrorism, anti-CSAM, etc. These things won’t be fully rooted out; they’ll just go to ground. Discourse will be harmed, though.
Let's consider two ways of dealing with this problem:
1) Law enforcement enforces the law. People posting CSAM are investigated by the police, who have warrants and resources and so on, so each post is another chance to get caught. When they get caught they go to jail and can't harm any more children.
2) Private parties try to enforce the law. The people posting CSAM get banned, but the site has no ability to incarcerate them, so they just make a new account and do it again. Since they can keep trying and the penalty is only having to create a new account, which they don't really care about, it becomes a cat and mouse game, except that even if the cat catches the mouse, the mouse just reappears under a different name with new knowledge of how to avoid getting caught next time. Since being detected carries minimal risk, they get to try lots of strategies until they learn how to evade the cat, instead of getting eaten (i.e. going to prison) the first time they're caught. So they get better at evading detection, which makes it harder for law enforcement to catch them too.

Meanwhile the site comes under increasing pressure to "do something" because the problem has been made worse rather than better, so they turn up the false positives and cause more collateral damage to innocent people. But that doesn't change the dynamic; it only causes the criminals to evolve their tactics, which they can do an unlimited number of times until they learn how to evade detection again. And as soon as they do, the site, despite its best efforts, is hosting the material again.

The combined costs of the heroic efforts to try to prevent this and the liability from inevitably failing destroy smaller sites and cause market consolidation. The megacorps then become a choke point for other censorship, some by various governments, some by the corporations themselves. That is an evil in itself, but if you prefer to take it from the other side: that evil causes ordinary people to chafe, so they start to develop and use anti-censorship technology. As that technology becomes more widespread with greater public support, the perpetrators of the crimes you're trying to prevent find it easier to avoid detection.
You want the police to arrest the pedos. You don't want a dystopian megacorp police state.
That is not the argument. The argument is that, with an appropriate court order, a site operator must take down the illegal material (if it hasn’t already been moderated out). However, the site owner should not be liable for that content appearing on their site, since it was not put there by them and since there is value in uncensored/unmoderated online communities. The person who posted the content should be liable, not the site owner. In neither case is the content just freely sitting there, harming the public and unable to be removed because nobody is liable for punishment.
I think an interesting alternate angle here would be to require unmoderated community admins to keep records of participants' real identities, so if something bad shows up the person who posted it is trivially identifiable and can easily be reprimanded. This has other problems, of course, but is interesting to consider.