The desire for protection isn’t the same as saying 230 actually applies. The case made it to the Supreme Court because it isn’t clear where exactly the law does and does not apply.
User content and the promotion of user content are different things. With billions of posts to choose from, Facebook can find a message saying basically anything. Picking out a handful of those messages and putting them in a TV commercial turns them from user content into Facebook's message.
Legally, 230 could be limited to direct content and its moderation (removal) while not covering manual curation. Similarly, purely algorithmic feeds may be yet another meaningful distinction.
It’s a surprisingly complicated topic and I doubt the Supreme Court will make a broad ruling covering every case.
Funnily enough, DMCA 512 already works this way. If you manually curate a content feed you lose your copyright safe harbor, so you're actually incentivized to remain willfully blind to certain aspects of how your site is being used. The Copyright Office has been complaining about this and arguing that we should pull all recommendation systems out of the copyright safe harbor.
I kind of disagree with this. It would make both safe harbors somewhat nonsensical, because we'd be incentivizing platforms to keep their systems broken. We understand that free speech on the Internet requires a minimal amount of censorship: i.e. we have to delete spam in order for anyone else to have a say. But one of the ways you can deal with spam is to create a curated feed of known-good content and users.
Keep in mind too that "purely algorithmic feeds" is not a useful legal standard. Every algorithm has a bias. Even chronological timelines: they boost new posts and punish old news. And social media companies change the algorithm to get the result they want. YouTube went from watch time to engagement metrics and now uses neural networks that literally nobody understands beyond "it gives better numbers". And how exactly do you deal with an "algorithmic" feed with easter eggs like "boost any post liked by this group of people"?
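To make the easter-egg point concrete, here's a toy sketch (hypothetical names and numbers, not any real platform's code) of a feed that looks purely chronological but quietly boosts posts liked by a favored group:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    created_at: float              # unix timestamp
    liked_by: set[str] = field(default_factory=set)

# Hypothetical "easter egg": posts liked by these insiders get quietly boosted.
FAVORED_ACCOUNTS = {"alice", "bob"}

def feed_score(post: Post) -> float:
    score = post.created_at        # the "purely chronological" part: newer wins
    if post.liked_by & FAVORED_ACCOUNTS:
        score += 86_400            # hidden bonus: rank as if posted a day later
    return score

def build_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=feed_score, reverse=True)
```

Nothing about the output labels itself as anything other than "newest first", which is part of why "purely algorithmic" is so hard to pin down as a legal standard.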
The alternative would be to do what the Copyright Office wants, and take recommendation systems out of the defamation and copyright safe harbors entirely. However, if we did this, these laws would only protect bare web hosts. If you had a bad experience with a company and you made a blog post that trended on Facebook or Twitter, then the company could sue Facebook or Twitter for defamation. And they would absolutely fold and ban your post. Even Google Search would be legally risky to operate fairly. Under current law, the bad-faith actor in question at least has to make a plausible through-line between copyright law and your post to get a DMCA 512 notice to stick.
By purely algorithmic systems I mean something like a hypothetical Twitter timeline showing the top 4 tweets from everyone you've followed in purely chronological order. Or a Reddit feed based purely on submission time and upvotes.
A curated feed would be something like the current HN front page, where websites from specific manually chosen domains are penalized.
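A minimal sketch of that contrast, assuming a made-up scoring formula (loosely in the spirit of Reddit's hot ranking) and a hypothetical penalty list standing in for HN's manually chosen domains:

```python
import math
import time

# "Blind" ranking: nothing but submission time and upvotes go into the score.
def blind_score(upvotes: int, submitted_at: float) -> float:
    age_hours = (time.time() - submitted_at) / 3600
    return math.log10(max(upvotes, 1)) - age_hours / 12

# Curated variant: same math, plus a hand-maintained list of penalized domains.
# The editorial decision lives in the list, not in the formula.
PENALIZED_DOMAINS = {"example-tabloid.com", "example-blogspam.net"}

def curated_score(upvotes: int, submitted_at: float, domain: str) -> float:
    score = blind_score(upvotes, submitted_at)
    if domain in PENALIZED_DOMAINS:
        score *= 0.3               # per-domain penalty chosen by a human
    return score
```

Both functions are algorithms; the difference being drawn here is that the second one encodes a human judgment about specific sites.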
I am not saying there is anything inherently wrong with curation; it may simply reflect what users want. However, as soon as you start making editorial decisions it's no longer purely user-generated content. That's the distinction I was going for: it's still an algorithm, just not a blind one.
> Every algorithm has a bias.
Using upvotes, deduplicating, or penalizing websites based on the number of times they have been on the front page in the last week definitely has bias, but it isn't a post-specific bias targeted by the website owner. I agree the lines aren't completely clear; once you start talking about AI, the story-specific bias can easily live in how the AI was trained. Still, I suspect something that flags child porn would be viewed differently than something that promotes discrimination against a specific ethnic group.