
> Material which is offensive, abusive, demeaning, or misleading is often lawful, and it often doesn't create substantial real liability

The bits of Twitter, Facebook, and Google I am exposed to are filled with scammers, spam, and explicit threats. These posts would get you arrested if you shouted them from a street corner in the U.S., let alone if they were amplified in print media or whatever.

Even the App Store is filled with straight-up fraud.

These are only minimally culled because that is the most profitable thing to do. Spam apps and fraud create engagement and generate revenue. Moderation costs money.

230 shouldn't be repealed (that would be apocalyptic), but it needs some more holes punched through it, kinda like the existing CSAM holes.




Where the posts are that serious and actionable, why aren't their authors being arrested or sued? S230 provides no protection for them. Why do you think civil liability for the platform would be effective where criminal liability for the speaker wasn't? Why wouldn't the criminal threatener just send an email or a letter, where the platform has no opportunity to see it at all?

Why not assume that the same bad actors would just use expanded liability as a weapon themselves, and that it would still be ineffective, just as the existing non-platform criminal and civil liabilities are ineffective for the things you're concerned with?


Because no one cares or can afford to go after a few hateful posters, especially anonymous ones.

Fraudsters aren't necessarily even in U.S. jurisdictions.

> Why not assume the same bad actors would just use expanded liability as a weapon themselves?

Oh, the big platforms would not let this happen. Otherwise trolls would already be wielding CSAM as a blunt weapon, but look how effectively that is squelched.


Two multi-billion-dollar lawsuits by a fraudster, which I'm personally being victimized with right now, beg to differ. Scammers and abusive people absolutely do abuse the courts. The rate is lower than for other kinds of abuse because they have to be well funded to do it, but unlike rude or threatening comments online, you can't just ignore a court.

I can also say firsthand that Wikipedia likely would have been destroyed in 2006 by vexatious litigation if it weren't for S230. I've been involved in a number of other online forums, and people trying to extort through legal threats are basically a constant; I'd be surprised if HN doesn't get them. With S230 these threats are fairly toothless. If they had any bite at all, most smaller services just couldn't exist, because the cost of dealing with them quite easily dwarfs the cost of providing the forum.

The fundamental issue I think your view faces is that even without S230 protection, there is a lot of bad stuff in the world that we just can't stop. You could conjecture some further restriction of S230 narrow enough that the liability wouldn't become an abuse vector, but given how much bad we can't stop even where S230 offers no protection, and even when the parties aren't, like the big social media platforms, nearly immune to litigation... it's hard for me to imagine how limitations narrow enough to avoid abuse wouldn't also be pointless/ineffective.

I fully agree that there is bad crap out there, but that doesn't mean that something can actually be done about it.

It's hard to discuss without a concrete proposal. Advocating for it without one, as you've done, also seems dangerously close to advocating for any reduction, well considered or otherwise. I think at the end of the day our problems are antitrust, not content liability. The horrible practices of platforms wouldn't be such a big deal were it not for network-effect lock-ins.


Yeah, I would never want these exceptions to apply to smaller companies.

> I think at the end of the day our problems are anti-trust not content liability. The horrible practices of platforms wouldn't be such a big deal were it not for network effect lock-ins

Yeah, touché.

Antitrust just seems like an intractable problem in the current political climate, while punching large-cap-only holes in 230 (even if for the wrong reasons) feels reachable.


230 should also apply to CSAM. If a provider in good faith doesn't know that their platform hosts CSAM, that absolutely should not get the provider a life sentence.



