> The catch here is that people are not aware that their content is frequently shadow moderated
Is that really the catch? Reddit didn't get to where it is today by saying "we totally allow free speech". Since its inception, it's been a trope that Reddit actively moderates away harmful content, and even constructive-but-contrarian content, to the point where every small subreddit community is a circlejerk/echo chamber for its own ideals. Anyone who just downloads the app might not know it at first, but if they actually dive into creating an account and joining subreddit communities, they'll learn soon enough, since moderators try not to make their own community mad by silencing all criticism of themselves (when they do, it tends to be exposed pretty quickly), and any huge shift in policy tends to sprout divergent communities, like how r/superstonk was created out of a wallstreetbets moderation dispute.
I think people understand that, especially when they're using an iPhone[0,1], everything they see is going to be filtered, with the exception of legal porn on social media apps. It's just the way that anything and everything gains enough traction to become relevant among the masses, and they'll know if they're on a "true free speech" platform as soon as the platform shows them pro-nazi images right next to pictures of puppies with zero algorithmic filtering or sorting.
Yes, users don't know about this. That is clear from the above quotes.
> Reddit didn't get to where they are today by saying "we totally allow free speech".
I don't know whether shadow moderation was necessary for Reddit to grow. They certainly don't inform users about it. That's manifested by the secrecy inherent in the feature itself.
> I think people understand that, especially when they're using an iPhone[0,1], everything they see is going to be filtered
People expect that authors are informed when their content is removed. That isn't happening with any consistency on any of the platforms, and it's built into the system. It is not a choice made by mods.
> they'll know if they're on a "true free speech" platform as soon as the platform shows them pro-nazi images
That's just the result of sidelining today's unpopular extreme. Both 60 and 120 years ago, information about gay marriage or contraception was considered immoral by those in power. It depends who's in charge. The environment will flip at some point, so don't give up your free speech principles. The shoe will eventually be on the other foot.
0: https://9to5mac.com/2022/01/11/tumblr-for-ios-updated-with-s...
1: https://www.pcmag.com/news/tumblr-explains-why-it-still-bans...