But should they decide what's right and wrong based on what is objectively right and wrong?
I think it's an interesting question. If you build a tool that ends up being used as a platform to spread misinformation and lies, is that ok? Is it ok to censor that kind of thing?
Set aside laws and general feelings about censorship. If certain kinds of speech are genuinely harmful to society, and if you can actually define that objectively (I know, that's very hard, if not impossible, in many or most situations), should you still allow that speech?
It's certainly a judgment call, and a lot of people might get that call wrong sometimes or even often. But is it pointless or harmful to try?
Think about moderated message boards. No one would take issue with a moderated message board where the moderators act to keep things on-topic and civil. (Hell, HN tries to be that, and IMO does a pretty good job most of the time.) Twitter has chosen, with the exception of things like hate speech and threatening behavior, to be hands-off. Was that a good choice? I'm not sure. I don't think it's unreasonable to think they could do just as well -- if not better -- if there were moderation of some kind built in.