
"If we are principled people, and FB is failing to moderate, then any reasonable person who supported Parler's removal would support Facebook's."

Is Facebook failing to moderate, or failing at moderation? If the standard is perfect moderation, there is no social media. If the standard is a good-faith effort at moderation, Facebook should be tolerated (if not compelled to do better) and Parler should be punished (unless they make good-faith efforts to do better).




Is Facebook actually moderating in good faith though?

Consider that divisive, offensive, and false content is guaranteed to generate engagement and thus contribute to their bottom line, while content without those traits is less likely to do so. They're already starting off on the wrong foot here: their profits correlate directly with their negative impact on society.

Consider that there is plenty of content on Facebook that violates their community standards, doesn't even try to hide itself, and is thus trivially detectable with automation: https://krebsonsecurity.com/2019/04/a-year-later-cybercrime-...
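
To be concrete, here's a minimal sketch of the kind of automation I mean: nothing clever, just matching public group names against a keyword list. The keywords and sample data below are made up for illustration; this is not anything Facebook actually runs.

    # Sketch: flag groups whose visible names openly advertise rule-breaking.
    # BLOCKLIST and the sample data are illustrative, not a real moderation system.
    BLOCKLIST = {"spam", "carding", "cvv", "fullz", "botnet"}

    def flag_groups(groups):
        """Return the groups whose names contain an obviously bad keyword."""
        return [g for g in groups
                if any(word in g["name"].lower() for word in BLOCKLIST)]

    sample = [
        {"name": "CVV Fullz Marketplace"},
        {"name": "Spam Tools For Sale"},
        {"name": "Neighborhood Gardening Club"},
    ]
    print([g["name"] for g in flag_groups(sample)])  # flags the first two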

Consider that Instagram doesn't remove accounts with openly racist & anti-Semitic usernames even when reported: https://old.reddit.com/r/facepalm/comments/kz10nw/i_mean_if_...

Is Facebook truly moderating in good faith, or are they only moderating when the potential PR backlash from the bad content getting media attention is greater than the revenue from the engagement around that content? I strongly suspect the latter.

Keep in mind that moderating a public forum is mostly a solved problem; people have done so (often benevolently) for decades. The social media companies' pleas that moderation is impossible at scale are bullshit - it's only impossible because they're trying to have their cake and eat it too. When the incentives are aligned, moderation is a solved problem.


How many massacres and beheadings have been live streamed on FB at this point? And yet very few seem to think FB is the problem.


What are you proposing? Banning live streaming?

Moderation is inherently an after-the-fact phenomenon. People are going to do unpredictable things, sometimes bad.


Brown people don't count, even when they lose their heads. Remember when a plane full of them crashed and there wasn't even a grounding?

It's got to happen to white folk in the US before anything will change.


I suspect the bar for acceptable moderation will always be set just a hair below what Facebook, Twitter, and YouTube can manage. Every time they fail again, they'll be hauled in front of Congress to explain how they'll rub a little AI on it. It'll become just a little more expensive to compete.


People keep saying Parler intentionally did not moderate. But every actual source I've seen says that they were trying to moderate but lacked the manpower to do so because the platform grew too big too fast.

I'd be interested if anyone can share anything indicating a refusal to moderate.


My two cents: Facebook has an algorithm that decides which posts are presented to each user. They should therefore lose their Section 230 protection. They are the ones deciding to put toxic and divisive information in front of users to drive engagement, instead of simply sharing posts in chronological order and letting users control all the filtering.
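
To make the distinction concrete, a minimal sketch (the field names and scores are hypothetical, not Facebook's actual ranking system):

    from datetime import datetime

    # Hypothetical posts; "predicted_engagement" stands in for whatever score
    # an engagement-driven ranker might assign.
    posts = [
        {"text": "family photo",      "time": datetime(2021, 1, 10), "predicted_engagement": 0.2},
        {"text": "divisive hot take", "time": datetime(2021, 1, 8),  "predicted_engagement": 0.9},
    ]

    # Chronological feed: newest first, no editorial judgment by the platform.
    chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

    # Engagement-ranked feed: the platform decides what each user sees first,
    # based on a score it chose to optimize.
    ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

The first ordering is a dumb pipe; the second is an editorial choice about what to amplify, and that's the kind of control that should come with responsibility.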



That's not what section 230 says today, but there's a very interesting debate to be had about what its inevitable replacement should say tomorrow. Ranking posts according to some unexplainable algorithm that includes things like keyword extraction, often "selfishly" to favor engagement, has proven to be far from benign. I think it's quite reasonable to say that as a platform exerts more of this control it should also assume more responsibility. If you're not prepared to take on that responsibility, stick to strict chronological order and/or user-defined priorities.

I don't particularly like it when Facebook (for example) buries content from my actual friends and family beneath posts that it thinks might be more engaging. It's usually wrong, BTW; the moment I recognize something as an algorithmic promotion I scroll past it as quickly as I can. They certainly shouldn't be showing me stuff from pages and groups I never expressed an interest in; if I want to find new sources, I'll ask. If they do those things, they are acting as editors and publishers, and should be treated as such. There are still problems to be solved around ads, privacy, and groups people have already joined, but if they'd at least stop pulling every user toward more extreme content - effectively recruiting for the worst of the worst - that would be positive.


Thanks for that clarifying link! I still wonder about this, though. Relevant section (c)(1) says:

>No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

I'm hung up on the words "provided by". Facebook's algorithm controls what is presented to each user. They are providing a view of some posts, and not others. Facebook is creating the wall for each user, right?

Or would all this still be considered moderation, allowing them to do what they want? Section (c)(2)(B) mentions not being liable for allowing users to control what content is accessed, but doesn't mention when the provider makes decisions like this.

At an extreme, could Facebook use their secret algorithm to promote all posts saying "stolen election" to all Republicans, demote contrary posts, and still claim Section 230 protection because they didn't create the posts, even though they chose what went viral among millions of posts?



