They (Twitter, Facebook, YouTube, even Reddit) act so very much like publishers that it seems to me the mistake was granting "platform" protections to sites that:

- claim broad rights over posted content;
- use complex logic to decide what to promote (that they promote anything at all is alarming, for a "platform"!) and what a visitor sees, while also hosting and distributing the content they're highlighting or promoting (which makes them distinct from some rando's best-of list on a personal website, linking to content hosted elsewhere);
- place ads alongside content but sometimes choose not to; and
- at times, engage in revenue sharing.

And that's before we even get into the censorship.
The whole point of Section 230 was to make imperfect moderation legal. It wasn't enacted because some random personal sites got sued. It was enacted because some very large companies got sued and Congress thought the results of the court cases were illogical.
Perhaps the most relevant distinction is between "moderation that the users have a choice about" and "moderation that the users don't have a choice about".
If a site wants to hide all posts/videos that promote some unpopular political belief, or that use offensive words, then implementing that censorship as the default user experience is perfectly acceptable, as long as users can choose to opt out of it.
There might be multiple reasons why a given post/video could be censored, and yes, this places a small burden on sites to tag every applicable reason rather than mark it for censorship at the first excuse, but I think a lot of the tagging work could be made the responsibility of the user who uploaded it.
Such a system would hopefully make moot the slightly disingenuous argument that "If sites can't ban political opinions I don't like then they also won't be able to ban spam". Obviously sites would be allowed to put neutral resource limits on users, to prevent DoS attacks.
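To make the idea concrete, here's a minimal sketch of what opt-out moderation might look like: posts carry tags naming the reasons they'd be hidden, and a post is shown only if the viewer has opted back into every one of those reasons. The tag names, data model, and helper names are all hypothetical, not any real site's API.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    body: str
    # Reasons the post could be hidden, supplied by moderators and/or
    # (as suggested above) by the uploader themselves. Hypothetical tag names.
    moderation_tags: set[str] = field(default_factory=set)

@dataclass
class UserPrefs:
    # Tags the viewer has chosen to see anyway; empty set = default experience.
    opted_in_tags: set[str] = field(default_factory=set)

def visible(post: Post, prefs: UserPrefs) -> bool:
    """A post is shown only if every reason it was flagged for is one the
    viewer has explicitly opted back into (set subset check)."""
    return post.moderation_tags <= prefs.opted_in_tags

posts = [
    Post("hello world"),
    Post("spicy take", {"unpopular-politics"}),
    Post("$$$ buy now $$$", {"spam"}),
]

default_user = UserPrefs()                          # sees only "hello world"
opted_in_user = UserPrefs({"unpopular-politics"})   # also sees "spicy take"

print([p.body for p in posts if visible(p, default_user)])
print([p.body for p in posts if visible(p, opted_in_user)])
```

Under this model, spam and DoS-style abuse can still be handled separately with neutral resource limits; the opt-out machinery only governs content the site would otherwise hide for subjective reasons.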
It distinguishes between something and a publisher, in that it says whatever-you-want-to-call-that-something can't be treated as the publisher (it uses that word) of information it's distributing.
It distinguishes between the person who uploaded the video and YouTube hosting the video. How YouTube exerts editorial control to promote some videos or delete others is not relevant. This is a good rule. HN couldn’t possibly exist without it.
There’s been a lot of really bad information on 230 from people who ought to know better.
I got mine from the text of the law. It does what you say, and also what I say. It definitely does distinguish between a publisher and a service provider (host, platform, whatever). It does so explicitly.
Unless you know of a way to do moderation with 100% accuracy, liability for user-generated content is infeasible.
Fulfilling this fantasy of forcing platforms to abandon their moderation efforts will just lead to all of social media degenerating into cesspits, filling up with porn and swastikas as all the normal people leave.
> Unless you know of a way to do moderation with 100% accuracy, liability for user-generated content is infeasible.
I agree that highly public social media, anything like what we see now, wouldn't work anymore.
I don't even necessarily think we should kill 230, but I don't think you should be able to curate and promote content, and claim strong rights to posted content, and still enjoy its protections. Yes, this means "algorithm-curated" social media that gives content broad public visibility and claims significant ownership of posted content would be in trouble. I think services like that should struggle to operate that way. Take ownership or don't; none of this pretending-to-be-one-thing-while-doing-another stuff. That doesn't mean we have to crack down on web hosts or ISPs or email providers or anything like that, since they're not doing most of that stuff.