Totally agree with you. Twitter should be publishing whatever the user wants to convey (unless it's violating any local laws). They shouldn't be deciding what's right or wrong based on their personal opinion.
> Twitter should be publishing whatever the user wants to convey (unless it's violating any local laws).
That's what they do.
> They shouldn't be deciding what's right or wrong based on their personal opinion.
Well, they could, but they don't. That arguably makes them bigger free speech advocates than you are, because you want to restrict what they can do on their own website, which is a privately run entity and as such not subject to 'free speech' laws.
If you operate a private platform that very explicitly does not censor opposing political views, even though you legally could, then you are very much not against free speech.
What about censoring hate speech? Racist speech? Sexist speech? Xenophobic speech? Outright lies? False information? Trump has tweeted all of the above from time to time. What value does that sort of speech have?
You might find the first three to be subjective, and I wouldn't entirely disagree with that, so how about the lies and false information? Should a person in a position of power be permitted to lie to people to sway their opinion? I would have no problem if Twitter were to fact-check Trump's (or anyone else wielding such power) tweets and delete or somehow diminish those that are outright lies, assuming there were checks on such a thing to avoid politicization of that fact checking.
Who gets to decide whether someone's opinion falls under these categories? Anyone who is put in charge of classifying such things will have a lot of power that can easily be mismanaged.
Furthermore, for what reason would you trust such a person?
If you already distrust Trump, will you trust him more if some nameless moderator decides which posts are okay and which aren't?
Besides, he's the POTUS and he has a lot of money; if Twitter starts messing with his posts he can easily set up a service of his own.
> Who gets to decide whether someone's opinion falls under these categories? Anyone who is put in charge of classifying such things will have a lot of power that can easily be mismanaged.
And will also be selectively enforced, as we've seen on basically every single social media platform today.
But should they decide what's right and wrong based on what is objectively right and wrong?
I think it's an interesting question. If you build a tool that ends up being used as a platform to spread misinformation and lies, is that ok? Is it ok to censor that kind of thing?
Set aside laws and general feelings about censorship. If certain kinds of speech are genuinely harmful to society, and if you can actually objectively define that (I know, very hard if not impossible to do in many/most situations), should you still allow that speech?
It's certainly a judgment call, and a lot of people might get that call wrong sometimes or even often. But is it pointless or harmful to try?
Think about moderated message boards. No one would take issue with a moderated message board where the moderators act to keep things on-topic and civil. (Hell, HN tries to be that, and IMO does a pretty good job most of the time.) Twitter has chosen, with the exception of things like hate speech and threatening behavior, to be hands-off. Was that a good choice? I'm not sure. I don't think it's unreasonable to think they could do just as well -- if not better -- with some kind of moderation built in.
> Totally agree with you. Twitter should be publishing whatever the user wants to convey (unless it's violating any local laws). They shouldn't be deciding what's right or wrong based on their personal opinion.
Stop blocking spam, advertising, and government propaganda then.
Absolutist arguments like this have an obvious flaw: certain classes of speech really are a problem.