
Twitter and Facebook have extensive systems for keeping the various horrors in check. They are not as good as I think they should be. But they're miles better than what Parler had, which was more of a fig leaf. (Full disclosure, I used to run an anti-abuse engineering team at Twitter. Now I'm at the ADL building the Online Hate Index.)

I will note that whatever antitrust beefs people have with Big Tech, Twitter isn't really in that league. Twitter's market cap is something like 4% of Google's and 6% of Facebook's. I think conflating the two issues here is unhelpful.



“Twitter and Facebook have extensive systems for keeping the various horrors in check. They are not as good as I think they should be. But they're miles better than what Parler had, which was more of a fig leaf.”

If Twitter and Facebook (being multibillion-dollar companies with a decade or more to build these algorithms) haven't solved this problem yet, how can any upstart possibly compete with them? They'd be shut down as soon as users started posting content on their fledgling services.


My NDAs constrain me from saying as much as I'd like, but it's a mistake to think that the key to fighting abuse is "algorithms". The heart of it is always human judgment. It starts at the executive level, which sets policy. That policy needs to be carefully socialized to users. Users need ways to report problems. And then you need trained staff to judge the reported content.

Algorithms can help, of course. But the problem is mainly a human one.
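
To make that concrete, here's a minimal sketch of how such a pipeline might be wired, with the algorithm doing triage and humans making the actual call. Everything here (names, fields, thresholds) is hypothetical, not a description of any platform's real system:

    from dataclasses import dataclass

    @dataclass
    class Report:
        content_id: str
        reason: str         # category from the published policy
        model_score: float  # classifier score in [0, 1] (assumed)

    def triage(report: Report) -> str:
        # The model only prioritizes the queue; a trained reviewer
        # still judges every report against the written policy.
        if report.model_score >= 0.95:
            return "urgent_queue"
        if report.model_score <= 0.05:
            return "low_priority_queue"
        return "standard_queue"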


Humans are incredibly biased in making judgements like this. How do you “train” a person to enforce a standard that goes against their inherent biases?

It seems like the more the process can be shifted toward objectivity (by moving away from human decisions), the more effective and fair it becomes.

Anyway, it sounds like you may work in this space, and, therefore, might have further insight, which I am very interested in hearing about.

It is definitely an Achilles' heel for social media platforms.


Unfortunately, there is no real objectivity here. Machine learning systems are fed large numbers of human judgments, are tuned based on human judgment, and then are deployed when other humans think them ready.

The way you get reasonable consistency, whether it's humans or machines, is by establishing clear standards, using them for training, and then continuously monitoring results. It's not perfect, of course. But nothing is.
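
One common way to monitor those results (a sketch; the function and data shapes are my assumptions, not any platform's actual tooling) is to seed the review queue with pre-labeled calibration cases and track each reviewer's agreement with the standard:

    from collections import defaultdict

    def agreement_rates(gold, reviews):
        # gold: {case_id: label}, the calibration set
        # reviews: [(reviewer_id, case_id, label)] from the live queue
        hits, total = defaultdict(int), defaultdict(int)
        for reviewer, case_id, label in reviews:
            if case_id in gold:
                total[reviewer] += 1
                hits[reviewer] += (label == gold[case_id])
        return {r: hits[r] / total[r] for r in total}

Reviewers whose agreement drifts below a threshold get retrained on the standard before going back into the queue.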


Being smaller, they can and should use human moderation. Twitter and Facebook have problems because of the huge number of posts. Parler is still small enough that they _can_ moderate.


Good point, but it is not scalable.

“Five million people were active on Parler on Monday, which Mr. Matze wrote was an 8-fold increase in user engagement from the previous week.”

https://m.washingtontimes.com/news/2020/nov/10/parler-says-i...

This doesn’t seem like a problem that could be solved with more human eyeballs.
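
A rough back-of-envelope shows the scaling problem. Every input below except the 5 million figure is an assumption I'm making for illustration:

    active_users = 5_000_000          # from the article above
    posts_per_user_per_day = 5        # assumed
    report_rate = 0.01                # assumed: 1% of posts get reported
    reviews_per_moderator_day = 500   # assumed reviewer throughput

    reports = active_users * posts_per_user_per_day * report_rate
    moderators = reports / reviews_per_moderator_day
    print(f"{reports:,.0f} reports/day -> ~{moderators:,.0f} moderators")
    # 250,000 reports/day -> ~500 moderators

Under those assumptions the headcount scales linearly with users, so an 8-fold jump in a single week means 8-fold hiring of trained reviewers, and nobody can recruit and train that fast.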


Facebook is in a way better position financially to use human moderators than something like Parler is. Sure, Parler has orders of magnitude less volume. They also have orders of magnitude less money. Facebook can afford to hire 1,000 human moderators for every 1 human moderator Parler hires.

This whole "Parler can be better than Facebook because they are smaller" argument is just as illogical as it sounds.


Where are you getting that Parler can be better? Better than Facebook at what?

Also, you don't think smaller groups, companies, whatever, are easier to control?


Doesn't make sense to me. Twitter and FB have more users, but they also have much more resources than Parler.


Parler didn't even try to remove the very narrow, specific set of posts referred to them by AWS. I think if they had done that, AWS would have had a much harder time justifying booting Parler off their platform.


How does Twitter's market cap relative to Google or Facebook have anything at all to do with Antitrust law?


Antitrust law is about constraining companies with excess market power, especially ones that use it in anticompetitive ways. That's a legitimate concern for companies like Google and Facebook, who dominate their markets. Twitter's popular, but as far as market power goes, it's far too small to dominate social networking.


I will admit that I dislike the projects you are working on, since nebulous 'hate speech' has been undermining US foundations of free speech, but I appreciate the level-headed argument.


Thanks!

It depends on what one thinks the foundations of free speech are. For a long time, hate speech has been used to suppress particular groups. In practice, if people with social power can scare disfavored groups into staying quiet, free speech is harmed. Harmed more, in my opinion, than by hate speech restrictions.

In practice, any platform has to choose between hosting abusers and hosting their targets. They will only get one or the other. Given that choice, I would rather boot the abusers. To me the goal of free speech in a democracy is about a maximally informed populace so we make optimal decisions. To the extent that any group wants to convey information, they can do it without abuse. But allowing abuse, especially that targeted at particular groups, limits speech more deeply.



