People advocating that position usually have a very specific idea about how they want sites to be moderated, but Section 230 is about not treating platforms as the speaker when one of their users posts illegal speech, regardless of how the platform moderates. Of course, politically biased speech is not illegal, so the proposal is really about punishing platforms for moderation decisions somebody doesn't like.
A more reasonable target for a 230 carve-out would be recommendation algorithms. Those aren't merely passively hosting user-generated content; they actively select what they think you should see to keep you engaged with the platform. Featuring content, rather than showing it ordered by some simple criterion like time, should be treated as editorializing rather than moderation. If a human editor decides to feature lies I tweet about you on their "best tweets of the week" page, you may be able to sue them for libel. If Twitter's algorithm shows lies I tweet about you to a large audience, you currently can't.
Arguing that the recommendation algorithm is editorializing is an argument that the choice of algorithm is itself an instance of free speech, which would be protected from such meddling.
I don't think current law, and the current understanding of that law, allows any major changes to how we treat platforms. I also tend to think that any major change in the law is liable to be for the worse, because even well-meaning lawmakers seem to have a mostly incompetent grasp of tech.
The algorithm would have free speech protections under such a scheme, and US courts would likely conclude that it does under current law. But those protections do not necessarily extend to repeating lies that I have published about you, which are not protected as free speech.
To be clear, the company has a free speech interest in its choice of algorithm. The lies might be protected speech, but 230 makes it very clear whom you are allowed to sue over them. Wishing the law were different doesn't change the law.