I think one of the things we need to appreciate is the scale of video uploads that YouTube has to deal with. There are something like hundreds of hours of video uploaded to YouTube every single minute. Aside from dodgy medical advice, they need to look out for child pornography, revenge porn, snuff videos and incitements to terrorism and violence, not to mention copyrighted content. There's no way they could hire enough people to review every single video that's uploaded, so if they're going to have any review at all, it has to be automated.
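To put that in perspective, here's a rough back-of-envelope calculation. The ~500 hours-per-minute figure is the commonly cited public number; the 8-hour reviewer shift is my own assumption:

```python
# Back-of-envelope: how many people would it take to watch everything?
# Assumes the widely reported figure of ~500 hours uploaded per minute,
# and a reviewer watching footage at 1x speed for a full 8-hour shift.

UPLOAD_HOURS_PER_MINUTE = 500   # approximate public figure
MINUTES_PER_DAY = 60 * 24
REVIEW_HOURS_PER_SHIFT = 8      # one reviewer, one working day (assumption)

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
reviewers_needed = hours_uploaded_per_day / REVIEW_HOURS_PER_SHIFT

print(f"{hours_uploaded_per_day:,} hours uploaded per day")          # 720,000
print(f"{reviewers_needed:,.0f} reviewers just to keep pace at 1x")  # 90,000
```

And that's 90,000 people watching nonstop at normal speed, before breaks, appeals, double-checks, or any growth in uploads.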
Getting their algorithm to have any understanding of the content that's being uploaded is an extremely difficult problem, and the fact that they're able to do so with any degree of accuracy is an impressive achievement, whatever the merits. Expecting a YouTube algorithm to be able to distinguish a nuanced, reasonable argument from bullshit is to expect a level of AI sophistication that doesn't exist yet.
YouTube could, and probably should, hire people to review videos from high-profile YouTubers, but this is only going to work for people who've already established themselves. There's no way to scale that out to everyone who wants to upload something.
So yeah, moderate voices pointing out that people who have already had COVID have a solid degree of acquired immunity, or that maybe we shouldn't shut down schools, are being clobbered. That's a bad thing, but it's a tough problem to solve.
I also think there's a broader problem: a handful of private companies have so much control over public discourse that they're able to effectively censor ideas. Or maybe they're not so effective, but the level of control that Google, Facebook, etc., have should give us pause.
I'm sympathetic to the idea that we should go back to the free-for-all internet of the '90s, where everyone who got online had equal access. That would allow a level of nuanced, moderate discussion that we desperately need, but it would also allow crazies, child porn, terrorists, and all the rest. If we don't want that kind of stuff to be easily available online, we need to figure out not just where to draw the line, but how to draw the line. This is a hard problem.