I don't think that really deals with beheading videos, incitement to terrorism, campaigns to harass individuals and groups, child porn, and many cases where online communities document or facilitate crimes elsewhere.
Child porn is illegal. Are beheading videos illegal? Incitement to terrorism is probably a crime (though I'd argue that it should be looked at under the imminent lawless action test[1] as it's speech). So all of these would be removed and are not part of a moderation discussion.
As to "many cases where online communities document or facilitate crimes elsewhere", why criminalise the speech if the action is already criminalised?
That leaves only "Campaigns to harass individuals and groups". Why wouldn't moderation tools as powerful as the ones employed by Twitter's own moderators deal with that?
The problem here is that the default assumption is that everyone on the internet is under the jurisdiction of US law, when the majority in fact are not.
These are global platforms with global membership, simply stating that “if it is free speech in America it should be allowed” isn’t a workable concept.
How about saying that if it is free speech in America it should be allowed in America, but censored in countries where it is against the law? It seems very easy to say.
So different users aren’t able to see full threads based on their location? You’re seemingly randomly able to respond in some circumstances and not others?
When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads?
> "So different users aren’t able to see full threads based on their location? You’re seemingly randomly able to respond in some circumstances and not others?"
Of course. That is what they've demanded, so that is what they get.
> "When there are people all over the globe participating in the same discussion, you can’t realistically have an odd patchwork of rules. "
On the contrary: you must have this, as a matter of law. There is no alternative, other than withdrawing from those countries entirely and ignoring the issue of people accessing your site anyway (which is what happens in certain extreme situations, e.g. states under sanctions).
> " It’s very common for people on this forum, for example, to be commenting on their experiences in Europe, where free speech is heavily curtailed in comparison to the states. How do you manage such threads? "
Here are the options:
1) Do not do business in those countries.
2) Provide different services for those countries to reflect their legal requirements.
There is no way to provide a globally consistent experience, because laws are often in mutual conflict (one state may, for example, prohibit discussion of homosexuality while another prohibits discrimination on the basis of sexual preference).
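To make the patchwork concrete, here is a minimal sketch of option 2: per-jurisdiction visibility rules. The country codes, topic labels, and sample rules below are purely illustrative, not a description of any real platform's policy (the "XX" entry stands in for the hypothetical state that bans the topic outright).

```python
# Hypothetical sketch: per-jurisdiction visibility rules for posts.
# The rule table below is illustrative only, not real law or policy.

BLOCKED_TOPICS = {
    "DE": {"holocaust_denial", "nazi_propaganda"},  # e.g. German speech law
    "XX": {"lgbt_discussion"},                      # e.g. a state banning the topic
    "US": set(),                                    # broad First Amendment baseline
}

def visible_to(post_topics: set, viewer_country: str) -> bool:
    """A post is shown only if none of its topics are blocked where the viewer is."""
    blocked = BLOCKED_TOPICS.get(viewer_country, set())
    return post_topics.isdisjoint(blocked)

print(visible_to({"holocaust_denial"}, "DE"))  # False
print(visible_to({"holocaust_denial"}, "US"))  # True
```

Note that the same post gets different answers for different viewers, which is exactly the "different users aren't able to see full threads" outcome described above.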
That's correct, and that's actually how it works right now (Germany has different speech laws and Twitter attempts to comply with them[1]). However, it is an American company, and it's not unreasonable to follow American law in America. I would also think it's quite possible to use the network effect of the service to bully places like Germany into allowing greater expression, or simply to provide it on the sly by making it easy for Germans to access what they want. Although, I do see the EU is trying to do the same in reverse, probably (as is its wont) to create a tech customs union that allows its own tech unicorns to appear (something it has failed miserably at, in part because of its restrictive laws).
If I had a tool that could (at least attempt to) filter out anti-semitism or Holocaust denial, then Germany could have that set to "on" to comply with the law. I'm all for democracies deciding what laws they want.
'x is illegal' is a cop-out (albeit often an unintentional one), and I wish people would stop using it. Anything can be made illegal; are you just going to roll over if expressing an unpopular idea becomes a crime? Conversely, illegality doesn't deter a lot of people, and many are skilled at playing with the envelope of legality, so absent any moderation you'll get lots of technically/arguably legal content that is designed to degrade or disrupt the operation of a target forum.
It's unhealthy to just throw every difficult problem at courts; the legal system is clumsy, unresponsive, and often tends to go to unwanted extremes due to a combination of technical ignorance, social frustration, and useless theatrics.
We're talking about a social media service adhering to one of the most liberal sets of speech laws and norms in the world, not the imposition of an unjust law on a population. Tell me I can't say the word "gay" on threat of imprisonment and I'll say it more, but that's not relevant to this discussion.
It's the "documentation of the crime" aspect of child pornography that makes it illegal. It is still technically illegal in parts of the US to possess, say, drawn illustrations of pornography featuring minors (what the Japanese call "lolicon"), but the legal precedents are such that it can't really be prosecuted.
That is, it's not clear in the US you can ban something on the basis of it being immoral, you need to have the justification that it is "documentation of a crime".
What makes child porn illegal is the argument that anyone who views or distributes it is re-abusing the victim. Otherwise, it would be justifiably illegal to create but not to view, possess, or distribute. Yet all are illegal in the USA.
This does not stop the FBI from being a major child porn distributor, despite that meaning the FBI is re-abusing thousands of victims under this rubric.
> What makes child porn illegal is the argument that anyone who views or distributes it is re-abusing the victim.
That's what makes it illegal? What if it's done on a private forum that the victim never finds out about? What if the victim is, say, dead? I don't think those change the legality.
Here there's a major difference between USA and EU law, and I daresay culture as well: how private information is viewed.
As far as I understand in the EU private information is part of the self. Thus, manipulating, exchanging, dealing with private information without the person's consent is by itself a kind of aggression or violation of their rights. Even if the person never finds out.
In the USA however private information is an asset. The aggression or violation of right only happens when it actually damages the victim's finances. So if the victim never finds out about discussions happening somewhere else in the world, well… no harm done I guess?
Both views are a little extreme in my opinion, but the correct view (that rights are only violated once the victim's own life has been affected in some way) is next to impossible to establish: in many cases the chain of events that can eventually affect a person's life is impossible to trace. Because of that I tend to suggest caution, and lean towards the EU side of the issue.
Especially if it's the documentation of a crime as heinous as child abuse.
That's the rubric courts and legislatures in the USA have used.
It is, in general, really really difficult to pass speech laws in the USA because of that pesky First Amendment -- even if they're documentation of a crime. Famously, Joshua Moon of Kiwi Farms gleefully hosted the footage from the Christchurch shooting even when the actual Kiwis demanded its removal.
But if you can argue that procurement or distribution of the original material perpetuates the original crime, that is, if it constitutes criminal activity beyond speech -- then you can justify criminalizing such procurement or distribution. It's flimsy (and that makes it prone to potentially being overturned by some madlad Supreme Court in the future with zero fucks to give about the social blowbacks), but it does the job.
In other countries it's easy to pass laws banning speech based on its potential for ill social effects. Nazi propaganda and lolicon manga are criminalized in other countries, but still legal in the USA because they're victimless.
If this makes you wonder whether it's time to re-evaluate the First Amendment -- yes. Yes, it is.
I'm in favor of the First Amendment remaining at least this strong. None of the above things strike me as nearly as dangerous as "the ruling party being able to suppress criticism and opposition by claiming that their opponents' words have potential for ill social effects".
Well, based on https://cbldf.org/criminal-prosecutions-of-manga/, it seems you probably can beat the charges, but it will take years and an expensive legal defense. People have been prosecuted and usually take plea bargains, so some amount of jail time can be expected. Simple cases of "manga is child porn! yadda yadda" can probably be overlooked, but if the police don't like you for some reason, getting arrested is definitely a risk. Although there is supposed to be "innocent until proven guilty", even getting arrested can disqualify you from many jobs.
> even getting arrested can disqualify you from many jobs.
That's something that I think is seriously wrong with the USA right now: the idea of an "arrest record", or at least the idea of it being accessible by anyone other than the police.
There are a number of situations where it is perfectly reasonable to arrest innocent people, then drop all charges. Let's say the cops arrive at a crime scene: there's a man on the ground lying in a pool of blood, and another man standing with a smoking gun holstered at his hip. Surely it would be reasonable to arrest the man that's still standing and confiscate his gun, at least for the time necessary to establish the facts?
But then, once all charges have been dropped (say the dead guy had a knife and witnesses identify him as the aggressor), that arrest should be seen as nothing more than either a mistake or a necessary precaution. It's none of a potential employer's business. In fact, I'd go as far as to make it illegal to even ask for arrest records, or to discriminate on that basis.
That's genuinely interesting (have an upvote) but a social media site's responsibility in a situation such as this is to assess legality, not prosecutability, hence it would be removed.
Anime image boards are not in a hurry to expunge "lolicon" images because they don't face any consequence from having them.
I wouldn't blame Tumblr for banning ero images a few years back, because ero images of real people are a lot of trouble: you have child porn, revenge porn, etc. Pornography produced by professionals has documentation about provenance (every performer showed somebody their driver's license and birth certificate, and probably got issued a 1099); if this were applied to people posting images from the wild, they would say people's privacy is being violated.
I'm not here to debate the legality of child porn or lolicon images, and I fail to see the relevance of what you've written to the provision of moderation tools to the users of Twitter.
> Laws don't enforce themselves.
What has that got to do with Twitter? Please try to stay on track.
The vast majority of moderator removed comments and posts on Reddit have nothing to do with the illegal activities you mention.
The vast majority of removed comments are made to shape the conversations.
I think most people would be ok with letting admins remove illegal content while allowing moderators to shape content, as long as users could opt in to seeing the content the mods censored.
This is a win-win. If people don't want to see content they feel is offensive, they don't have to.
Legal vs illegal cannot be enforced on a private platform, because the truth procedure for "legal vs illegal" involves a judge, lawyers, and often years of waiting.
What you can enforce is "so and so says it is illegal" (accurate 90% or 99% or 99.9% of the time but not 100%) or some boundary that is so far away from illegal that you never have to use the ultimate truth procedure. The same approach works against civil lawsuits, boycotts and other pressure which can be brought to bear.
I think of a certain anime image board (one whose content is so offensive it can't even host ads for porn) that stopped taking images of cosplayers or any real-life people, because doing so eliminated moderation problems that would otherwise be difficult.
There is also spam (should spam filters for email be banned because they violate the free speech of spammers?) and other forms of disingenuous communication. When you confront a troll, inevitably they will make false comparisons (e.g. banning Kiwi Farms is like banning talk to the effect that trans women could damage the legitimacy of women's sports, just when people are starting to watch women's sports).
On top of that there are other parties involved. That anime site I mention above has no ads and runs at very low cost but has sustainability problems because it used to sell memberships but got cut off by payment providers. You might be happy to read something many find offensive but an advertiser might not want to be seen next to it. The platform might want to do something charitable but hosting offensive talk isn't it.
> (should spam filters for email be banned because the violate the free speech of spammers?)
I submit that spam filters should be under the sole control of their end users. If I'm using a Yahoo or Gmail account (I'm not), I should have the option to disable the spam filter entirely, or to use only personal parameters trained on the mail that I alone received, and no email should ever be summarily blackholed without letting me know in some way. If an email bounces, the sender should know. If it's just filtered, it should be in the recipient's spam folder.
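Those constraints can be sketched as a small routing rule, assuming the setting names I've made up here: the filter is a per-user toggle, and every message ends up delivered, bounced (so the sender knows), or filed in the recipient's spam folder; nothing is ever silently dropped.

```python
# Hypothetical sketch of user-controlled spam handling: no silent blackholing.
from enum import Enum

class Action(Enum):
    INBOX = "inbox"              # delivered normally
    SPAM_FOLDER = "spam_folder"  # filtered, but still visible to the recipient
    BOUNCE = "bounce"            # rejected, and the sender is notified

def route(message_is_spam: bool, filter_enabled: bool, reject_spam: bool) -> Action:
    """Decide what happens to a message under the user's own filter settings."""
    if not filter_enabled:
        return Action.INBOX  # user disabled the filter entirely
    if not message_is_spam:
        return Action.INBOX
    # Spam is never silently dropped: either bounce or file into the spam folder.
    return Action.BOUNCE if reject_spam else Action.SPAM_FOLDER
```

Every branch returns one of the three visible outcomes, which is the whole point: there is no code path where a message disappears without either the sender or the recipient knowing.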
> because the truth procedure for "legal vs illegal" involves a judge
This part is not correct. Private companies block what they believe to be illegal activities in their systems constantly - in order to limit the legal liability of being an accomplice to a crime. This is the case in all industries - and is standard practice from banking, to travel, to hotels, to retail... it's commonplace for companies to block services.
For spam, I would recommend that it gets a separate filter-flag allowing users to toggle it and see spam content, separately toggled from moderated content.