I think this might be a misinterpretation of the word "responsible".
If a platform lies or spreads malicious content, it seems people want the platform to bear liability and face consequences for the malfeasance. That is what most people mean by "responsible".
Government sets the rules, and if someone fails to comply, there are consequences for those responsible. Government isn't responsible; it is holding them responsible.
> If a platform lies or spreads malicious content, it seems people want the platform to bear liability and face consequences for the malfeasance. That is what most people mean by "responsible".
The platform isn't lying, any more than the mail system is lying if I write "1 equals 2" in a letter and send it through the mail to someone else.
What if the platform decides that people will send more mail if it pisses them off, so it copies your letter and sends it to everyone in its service area? Is that still just you lying? At what point does the platform become responsible for amplifying what you said? Are you responsible when it's amplified to everyone, if all you ever intended was to send it out into the void?
Yes, the analogy to the postal service falls apart when discussing one-to-one correspondence. That's more like a DM than social media.
If, however, some of your junk mail included mass mailings of brochures to join the KKK or some neo-Nazi group, I could see why people would want the postal service to crack down on that. That is a fair analogy.
Social media platforms do not operate like the mail or telecommunications infrastructure. Suppose that a clique of high-follower verified users on X formed a private Discord channel in which they coördinated false but plausible moral-panic news events in order to foment spontaneous violent acts against minorities (for added effect, perhaps by pooling their resources into generative AI tools), and that both platforms refused to address this by shutting down the channel, banning the users, or even reducing their reach on the timeline. While there remain reasonable arguments against governing this bad behavior through legislation, it is plain that the social media platforms would be implicated in the negative outcomes of the behavior to a greater degree than a mail carrier would be.
A “ministry of truth” would (I assume) be a part of the executive branch of government.
Whereas the creation of laws and the interpretation of laws are powers that the executive branch does not have, and are held separately by the legislative and judicial branches.
In a, well, y’know “functioning” democracy. Apparently.
The US already has that. What do you think the courts do?
People complaining about building a "ministry of truth" in countries with anything resembling a functioning legal system are just as clueless as people who cry about "government death panels" while private insurance already denies people lifesaving medicine right freaking now.
Judges. The question is mainly whether there should be some rules, independent of the companies' own, that content must follow, with people who feel wronged enforcing their rights under those rules in civil lawsuits, or whether more should be allowed at first, until a civil or criminal lawsuit might stop it. (It is already a mixture of both, so it's a matter of degree.)
I personally prefer an emphasis on the first solution because I think it is better at combating the widespread lack of civility on social media, which I believe harms society substantially, but I also understand the people who prefer the second model.
This is exactly right. To silence even offensive ideas is to appoint someone as the final arbiter of truth, something history shows to be dangerous. Truth doesn't need protection; it needs criticism. Censoring 'offensive' or 'sacred' ideas kills the very process (open debate) that lets society correct errors and find truth, even if it's uncomfortable. Everything else is dogma.
In most common law countries, juries fill that role.
Currently, in the US, internet companies get a special exemption from the laws that apply to other media companies via Section 230 of the Communications Decency Act. If traditional media companies publish libelous material, they get sued. Facebook and Google get a "Case Dismissed" pass. Most people look at the internet and conclude that hasn't worked out very well.
It's unclear to me what a good solution would look like. If platforms didn't have those protections, they probably wouldn't be able to exist at all. Any moderation would have to be perfect, or they would be open to lawsuits. And no platform could afford that. It's a tough one.
Are you implying that there are no posts on social media platforms that are plainly and verifiably wrong, and that any such decision needs to be made by a government-created ministry of truth? Is there no middle ground? Maybe something like a court?
If I state here, plainly and as a fact, that golieca eats little children for breakfast and slaughters kittens for fun, could @dang not look at both a statement from you and one from me and see whether I have sufficient proof?
"If I state here plain and as a fact that golieca eats little children for breakfast and slaughters kittens for fun, could @dang not look at both a statement from you and one from me and see if I have sufficient proof?"
Nah, he would just (shadow)ban you.
But in general, we had that long and broad debate about what truth means during Covid. Who decides what the scientific consensus is, for instance? (I don't remember a crystal-clear outcome, though.) In case of doubt, we still have courts to decide.
There are a lot of grey areas: statement of fact vs. opinion, open scientific consensus, statements about public figures vs. private individuals, … But the post I'm responding to basically says "there is no truth, let's give up," and that's just as false.
Yeah, the US in particular functions on liability. If changes are made that make companies liable for the externalities from their platforms, they will almost instantly find ways to address the issues.
But if the platform didn't verify all the content users placed on it, does that count as "spreading" it?
I mean, there's nothing stopping anyone from publishing a book that spreads lies and malicious content; book banning is looked down upon these days. Why aren't publishers held to the same standard? What makes a web platform different?