
The difference is that in the case of email we are talking about clear cases of spam. In the case of Fediverse instances, the reasons for blocking are ideological differences. That's exactly what should be left to the individual, as opinions diverge a lot. With spam, everyone agrees that it should be blocked.


Rigid fixation on spam is an overly constrained understanding of the problem.

Email spam is a problem because it directly attacks the utility and value of the communications channel, driving people to other alternatives (or none at all in some cases). Similar issues exist with telephony abuses (robocalls, scam calls, spoofing, privacy invasion and surveillance, etc.).

In the case of group discussion / social / microblogging platforms, a key dynamic is the nazi bar problem (let one in and you're now running a nazi bar), and the race-to-the-bottom dynamic of various forms of harassment and intimidation: those voices which don't feel safe talking on a platform or channel won't talk on that channel. They're denied a platform, and the platform is denied their voice.

(The Fediverse is actually under fairly sustained criticism by those voices for not having sufficient tools, policies, and/or enforcement.)

For commercial, advertising-supported platforms, an additional consideration is advertisers' sensibilities, and the fact that high-value advertising, brand-safe content, and attractive advertising audiences are all factors which are dependent in large part on content moderation policies. This doesn't apply generally to the Fediverse (though individual ad-supported instances might appear within it, as with Threads). It does strongly apply to Twitter and Facebook's properties generally, however.

There's also the observation that clue flees stupidity and/or banality. The more a channel is taken over by low-signal content (whether overtly abusive or not), the less that intelligent and substantive contributors will care to engage with it.

That again is a dynamic I've observed for many decades now online, and one I'm coming to appreciate has a long prior offline history as well.

And again, these are all cases where systemic abuse requires a systemic response. Your initial comment is not only naive but demonstrably infeasible. It's been tried, repeatedly, and it simply does not work.

The fact that we're having this discussion on a forum which has system-level controls over what does and does not appear, and no individual user tools to accomplish the same (bar hiding specific stories), somewhat underlines my point.


I disagree. The claim that everything without censorship becomes a "Nazi bar" is ridiculous. It is only defensible if you have an absurdly broad definition of "Nazi". The claim that advertising and "brand-safe" content justify political censorship is an especially sad one. As for the existence of alleged "systemic" abuse: I don't see any evidence that it exists.



