The distinction between free speech as a legal right and cultural value (thefire.org)
168 points by minding on Dec 1, 2022 | 411 comments



If you are reading a piece about free speech and content moderation on platforms like Twitter, look to see if it meaningfully engages with the concepts of spam and porn, which are both fully legal speech, and both heavily moderated with little controversy. (This piece does not.)

Why does this matter? Because free speech, free expression in general, is a means to express values and beliefs. Our values and beliefs are integral to the way we each evaluate the content decisions of private parties like social platforms, publishers, and people.

No one values spam, so no one writes hand-wringing essays about whether it is ok to kick spam and spammers off social media. Most people recognize that porn, even if they like it, is not something that should be in everyone’s faces all the time, so there is little existential concern for free speech as a whole when porn is excluded or hidden from some contexts.

You cannot expect to make sense of free speech if you try to ignore other values.


Irrelevant speech is noise. Freedom of speech is freedom to speak to those who might want to hear, not to push ads and spam and porn on those who don't want it. There's no freedom of speech in a night club. If there's noise you can't speak.

It doesn't stop with porn, if you're on a social platform to connect with friends and all you see is political things you don't care about, that's noise too.

If a social media platform doesn't let you suppress things you don't want to see, that's a problem with the platform.

If a social media platform doesn't let you suppress speech you don't want others to hear, the problem is with you.

If you're trying to muddy the waters of free speech with false narratives and strawman arguments like spam and porn, that's just cheap manipulation.

This has nothing to do with values, if someone doesn't want to see porn, he should be able to avoid it, and if someone wants to see porn he should be able to see it. Spam is by definition something nobody wants to see.

Only collectivists and authoritarians would start going off on "society" values as if their values are superior enough to warrant forcing.


> If there's noise you can't speak.

And that's really what moderation is ultimately about. On every forum and platform: suppressing noise and increasing the signal-to-noise ratio.

> If you're trying to muddle the waters of free speech with false narratives and strawman arguments like spam and porn, that's just cheap manipulation.

How is that trying to muddy the waters? It's absolutely relevant. They're both great examples of speech that's completely legal and yet universally banned.

> Spam is by definition something nobody wants to see.

But then what's spam to one person might not be spam to another. There are always some people who do respond to spam.

Same thing with hate speech; most people don't want to see it, but some people disagree on what it is. And let's face it, hate speech is the thing that some people try to defend under the banner of free speech. And unlike spam, hate speech can pose a real danger. It can inspire people to commit violence against the group being hated (this has happened plenty of times). If left unchecked, it can create a culture in which violence against the hated group becomes acceptable.

All meaningful speech has consequences. Calls to violence definitely so. Is this really something that needs to be defended when spam and porn don't?

I'm not saying it definitely does or doesn't, but if you want to have a meaningful discussion about free speech, you have to be willing to go into specifics, and not just wave meaningless platitudes around.


> suppressing noise and increasing the signal-to-noise ratio.

So apparently here lie the gory details. There was a short interview article [1] with one of the fired Twitter engineers, who claimed to have been "basically responsible for messing up your timeline", and as a followup to the article, someone on Twitter believed to be that person defended the end state, the one GP considers worse (as does everyone!), on the grounds that it showed better retention in A/B testing.

They must have been doing what they thought was good, backed by scientific methods and hard data, SNR ever improving in their pandas dashboards. Yet there is broad agreement that Twitter betrayed and sabotaged its user community, engaged in artificial manipulation of honest, good people, and accelerated division, radicalization, even the promotion of terrorism, worldwide.

I think we have an elephant in the room, and the definition of SNR is one of its legs. Something is off.

1: https://www.businessinsider.com/pro-elon-musk-twitter-employ...


> hate speech is the thing that some people try to defend under the banner of free speech.

I think it's more of an acknowledgement that there's a price to freedom/liberty and there are fundamental trade-offs involved. I.e. can't have our cake and eat it too.

The difficulty is in finding where exactly to draw the line. For better or worse, there are currently many technically legal yet unsavory views allowed under "freedom of speech".

You've spelled out pretty well the dangers of free speech, but if we err too much on the side of caution, that itself causes animosity and discord from (perceived or actual) loss of liberties. And of course, if left unchecked, it can create a culture which embraces more centralized concentration of power, inviting corruption and further concentration.

I'm still wrestling with my own thoughts on where to draw the line and how to balance freedom with safety.


Exactly. I'm not going to claim there's an easy solution to any of this. The line between "valid political opinion" and "hurting someone" can be very thin indeed. Sometimes they overlap, and then what?

All I'm saying is: we should be honest about these trade-offs, and not pretend there's such a thing as "absolute free speech". And I also think we'd better err on the side of allowing too much speech than on the side of too little. But at the same time, I also think events of the past couple of years warrant a reevaluation of those trade-offs and where we draw those lines. And I don't think Twitter, Musk, or any other social media company should be making those decisions for us, but they probably should be a voice in that discussion.


Completely agreed. Although I'm not sure the balance of trade-offs needs to change due to recent events, I certainly believe it's worth discussing.


There’s free speech from a legal perspective and free speech from PR perspective. I’d argue that free speech from a legal perspective is actually pretty free in most western countries.

From a libertarian perspective private platforms should freely decide what they allow and what they won’t. The public can also freely criticise them and move away. Advertisers can freely decide if they want to advertise there.

Every libertarian should visit a true neo-nazi site like Stormfront to see how much we are allowed to say.


Started off strong there but your points and logic fall off.

Porn is not "completely legal". There are many limits to porn within the law.

Inciting violence isn't protected speech. Hate speech isn't protected speech.

The first amendment states explicitly that the government cannot create laws to restrict the human right to free speech and association. That does not mean every expression is legal. Comparing legal things and illegal things is muddying the waters.


> All meaningful speech has consequences. Calls to violence definitely so. Is this really something that needs to be defended when spam and porn don't?

To me it's all pretty simple. The platform has its loyalty to the listener. It is up to the listener to decide whether he wants to engage with different kinds of speech, and most networks already can determine that, either by separation into forums and communities or just by general engagement metrics.

If you prevent exposure because the listener doesn't want to be exposed, that's OK.

If you prevent exposure because you don't want the listener to be exposed, that's censorship.

Censorship is suppressing information because it might be listened to, while spam filters and porn filters are suppressing information because it won't be listened to. The ethical and moral boundary is clear, and it is strictly the platform's loyalty to its consumers.

Anything else is inherently positioning the platform in an assumed moral superiority to its users, which given the recent Twitter revelations, is an incredibly bad assumption.

Moderation can be both good and bad. Good moderators are those that remove content nobody will want to see. Bad moderators remove content because people might see it.

There are plenty of calls to violence from the Ukraine side against Russia. None of those got censored. Should they?

I'm not defending the essence of the speech. It should be irrelevant. I'm defending what is a moral intent and what is an immoral intent of the platform. There's meaningful discussion and then there are constructing strawman and fabricated threats and vague ambiguous terms like hate speech.

If your method for deciding whether to censor foofoo is whether there exists a story where foofoo leads to bad outcome, and someone wants foofoo censored, they will create that story (potentially including real life actions). Therefore deciding whether to censor foofoo should be independent of the existence of those stories. They also tend to overgeneralize and use broad categories when the stories themselves are specific anecdotes.

To me the boundaries are pretty clear, and everything else is just people telling pretty meaningless stories, conflating terms, and having inconsistent standards.

Understanding where things stand morally is something you do after discussing them, not before.


> Only collectivists and authoritarians would start going off on "society" values as if their values are superior enough to warrant forcing.

Individuals can have values. I have disagreements with the parent comment [1], but free speech is certainly something that people value.

Disagreement tends to come in regarding what we each value, and the hope is that free speech is a shared one.

[1] https://news.ycombinator.com/item?id=33827292


Nobody ever frames it as having someone censor what they see.

Everyone wants to censor what others see. Why is that?


What about spam from political parties?

I don't have a link now, but IIRC in the last US election Gmail's spam filter blocked fundraising or "informational" emails from the major political parties, and it generated a big controversy.


That "controversy" was not in good faith.

https://www.theatlantic.com/technology/archive/2022/11/repub...

> “My parents, who have a Gmail account, aren’t getting my campaign emails,” Representative Greg Steube of Florida told Google CEO Sundar Pichai in July 2020, during a congressional hearing that was ostensibly about antitrust law. “My question is, why is this only happening to Republicans?”

My Gmail spam folder is very clear evidence it happens plenty to Democrats.

https://www.theverge.com/2022/10/13/23403259/google-gmail-re...

> In response, Google has launched a controversial program allowing campaign committees to effectively opt out of spam filters — a huge concession to mounting political pressure from Republicans. But Verge reporting shows the RNC has not taken advantage of the program and made few efforts to alter the core practices that might result in their emails being labeled as spam.

> A source familiar with the matter confirmed to The Verge that, nearly a month after the pilot’s launch, the RNC has not joined or even applied for the program, even as the party continues to mount political and possibly legal pressure against Google. The RNC did not respond to multiple requests for comment regarding the committee’s decision to abstain from the pilot program.


In case it's not clear, I think they should go to spam like any other spam, whatever the party or topic.


"Irrelevant speech is noise"

And who judges the relevancy?


The listener obviously. You can't force anyone to listen anyway. And anyone else deciding means denying people things they wanted to hear.


There's a lot of porn on Twitter. The fact that so many people use porn as an example of something Twitter has to ban, not knowing that Twitter doesn't ban porn and that it nonetheless isn't in their faces all the time, perfectly illustrates that banning things from a platform such as Twitter is not necessary: people are perfectly able to avoid things that are not banned.


I'm in Vietnam and about six months ago I made a new Twitter account for our business.

I can tell you that this new account got absolutely flooded with porn suggestions, all Vietnamese language, a lot of it looking like borderline underage stuff.

I'm not anti-porn in any way but this was pretty awful. My business is a bakery, I did nothing to invite these suggestions except sign up. I did start to follow bakery and coffee related accounts and the suggestions got better after a few days but not entirely. Then for reasons about a month ago I decided that our business shouldn't use Twitter anyway.

Incidentally when I log into my personal Facebook account here, since I have personalized ads turned off I get shown local Vietnamese language ads in the sidebar and for several months these have just been straight up hardcore porn. Loads of penis enlargement ads featuring close ups of actual penises, lots of "hot women in your area" showing way more than I want to see when I just logged in to check how my family back home in Europe are doing.

Just because you don't see unwanted stuff on social media, don't assume other people have the same experience. Facebook are notorious for basically doing zero content moderation in Asian countries (see Myanmar), and we all know what's going on with Twitter. I'm pretty sure that within a pretty short timeframe it's no longer going to be borderline underage stuff in non English speaking parts of the world.


> Facebook are notorious for basically doing zero content moderation in Asian countries (see Myanmar)

None of us know what kind of content moderation went on there. There is no public index of what gets removed, and according to Brandon Silverman, even Facebook itself does not review it:

> "violating content... in a lot of cases what happens is it gets removed, it gets taken down from the platform, and more often than not, essentially deleted, just disappears forever. A lot of that violating content is really important to the public interest, and it would be enormously valuable if we were able to create spaces for that content and the actors involved and the networks they create and build to be studied by an outside community, and an independent research ecosystem over time."

It's possible that whoever was removing content there was close to the military. We don't know. Other Asian countries could have similar influence. Isn't it accepted that Mark wants to bring Facebook to China? Maybe they wanted to see Facebook demonstrate its content controls before allowing them in. ¯\_(ツ)_/¯

[1] 16:45 in https://podcasts.apple.com/us/podcast/the-lawfare-podcast/id...


Well, I believe I said “porn is excluded or hidden from some contexts,” which is how Twitter handles it, at least within the U.S.

Anyway, the point is that moderation of porn is widely socially accepted because it aligns with values that are widely held. And if you want to understand why people want hate speech moderated (for example), you also need to look at it through the lens of values and beliefs that people hold.


Twitter does ban and restrict access to what it considers porn-like. In fact, they seem to be more relaxed about straight-up hardcore pornography than about marginal content, so as to strengthen some sort of isolation.


The issue with porn on Twitter isn't whether adults may or may not be confronted with it. It's not about the adults.


The article explicitly makes the distinction between law and culture in free speech. Implicitly, that includes your concern about spam and porn, which are legally approved and culturally disapproved.

For example, the article points out that Apple could decide at any point to remove the Twitter app from their store for any reason. Such a reason could be that there is lots of porn on Twitter, and this could encourage Twitter to discourage porn on their platform, kind of like what happened with Tumblr.

As a consumer, I don't mind it if platforms censor spam and porn as long as there is a switch somewhere I can toggle that will let me opt out of it if I want to see spam and porn. That's just my personal preference and if enough other people express that preference then the culture around free speech will change.


> I don't mind it if platforms censor spam and porn as long as there is a switch somewhere I can toggle that will let me opt out of it if I want to see spam and porn.

In this sentence, you've gone farther than the article (and many similar posts) in defining how you'd trade off between free speech and other values. Extrapolating a bit, it sounds like:

(1) You believe "platforms" have responsibilities in free speech culture to make all submitted content available.

(2) It is ok for platforms to control default visibility of content based on what they perceive as pervasive values, as long as those controls can be overridden by users.

(3) You trust social and market competition among platforms to make sure there are platforms aligned with enough expressions of values that everyone gets speech.

I'd love to see a deeper dive on some of these points by cultural-not-legal advocates. Some questions I'd like to see vigorous discussion of:

- When does something become a platform, and start having responsibility to rebroadcast all submissions?

- How much friction is ok for a platform to introduce before it blurs the line with censorship? (Extra submission hurdles? Demonitization? Deamplification? Opt-ins vs opt-outs?)

- Are there categories of values that are ok to introduce friction around (e.g. porn) vs others that are not (e.g. politics)? How can we separate them reliably?

- What are the qualities of competition between platforms that need to be maintained to make sure the allowable friction reflects a range of cultural values?

As a speech-not-reach guy, my conclusion is that platforms are participants and inevitably express their own values through curation, so it's most important to keep competition alive at the platform level. However, I think there could be a better steel-man case for platforms having coherent responsibilities than I've seen. A lot of the discussions start strong and then devolve into breathless quotes about freedom.


With (2) and (3), I didn't mean to leap from is to ought right away. I think it's to be expected that companies that publish user generated content will choose to control visibility of content based on what it believes are the prevailing values of the various stakeholders like users, advertisers, regulating agencies and so on.

As a small stakeholder, I naturally would prefer a world in which the other stakeholders share my values, because that would make the companies more willing to do what I want. Right now that means I would like the culture of free speech to change in my favor.

If I lived in a place where regular people liked free speech but the government liked censorship, then I would want the laws of free speech to change, and I'd believe the culture to be fine.


Sorry, I forgot to directly answer the questions.

- I think a platform doesn't have a specific responsibility to rebroadcast everything. If I don't like what things they choose to rebroadcast I'll find them less useful and start using a different service.

- For content that I don't want to see, they should introduce any hurdle they want. I am only annoyed with censorship when they get in between the sender and the receiver without asking the receiver first. For example, censoring spam and porn is fine when done at the request of the user who would receive the spam and porn. Censoring misinformation is less fine because it has to be done without permission of the receiver. The receiver may be gullible and stupid, and then it looks like censoring misinformation is good. But sometimes the receiver is smart and better informed than the censors, and it's not easy to tell in advance.

- Same as the previous: the categories for which it is OK to introduce friction are those that the user who would receive the messages asks you to censor. For example, when an ad is irrelevant there is often a button you can press to tell the platform that you don't want to see those kinds of ads, and then it starts showing different ads. I would like something similar for spam, porn and misinformation.

- I don't know about competition, network effects seem very strong. Instead of having a special network only for special people who like free speech, I would prefer to change the wider culture so that the mainstream social networks support free speech. It's either that or wait for some crazy billionaire who happens to value free speech to buy the mainstream platform? Seems unreliable.


Belated thanks for answering here!


I think about it in terms of pro- and anti-social speech. Free speech is about pro-social speech, not anti-social. The way I see it, people conflate the two types of speech.

Citizens arguing about ugly topics like identity politics, conspiracies, coverups, etc. are de-facto pro-social - they are trying to sway public opinion on political issues, ostensibly to make their society better. It becomes anti-social when those arguments are being pushed by outside parties (Russian/Chinese/corpo propaganda) or grifters.

Pro-social speech can also become anti-social when people get too heated and start attacking others based on their beliefs. It's difficult to deal with speech that is both pro- and anti-social, in terms of trying to convey a sincere argument while also being toxic to those who don't agree. In cases like that, "rules of engagement" or a code of conduct should be implemented.

Anti-social speech is saying things without the goal of construction/progress, but instead with the goal of destruction or abuse. Spam and porn are anti-social in certain contexts and are treated as such, the same way drug use and swearing are (or maybe used to be).

Assuming you believe in the thesis that free speech prevents social collapse and totalitarianism, then it doesn't matter whether you disagree with or even hate certain arguments/views, if they are sincere then they are pro-social and should be given some platform to be heard and interacted with in the mainstream.


I think this is the dangerous opinion that led us down the censorship path that the article is decrying. “Pro-social” and “anti-social” are subjective, not objective.

To a religious conservative, it would be pro-social to say “marriage should only be between one man and one woman”. To many progressives, this would be “anti-social”.

Here’s a conundrum. Is “we should kill all nazis” pro-social or anti-social? On its face it seems to promote violence, so maybe anti-social, but the people it’s targeted at themselves support horrific views.

As a Gen X libertarian, I am pretty much a free speech absolutist (I support minimal legal limits such as CSAM because there must have been victimization involved). It boggles my mind that so many young people today don’t support free speech; tides of culture and government change and if you allow free speech to slip away when trends support suppressing ideas you oppose, it won’t be there for you when ideas that you support are in the crosshairs.


AFAIK Musk has also characterized himself as a free speech absolutist but has now re-suspended Ye.

I’d be curious to hear whether you think that this suspension is compatible with free speech absolutism? Put differently, was Musk simply marketing himself as an absolutist (to win over following) but is in reality a pragmatist, willing to bend his stance when he views it in his own or society’s best interest?


For now, Twitter can ban whoever it wants, but I don't think that banning Ye really aligns with free speech absolutism. Mandatory disclaimer: I found Ye's comments highly offensive (and stupid), but that is not my standard for censorship.


Yes. That much was obvious when he kept Alex Jones banned over an anecdote. It's more of a marketing tactic, not an exercise of morals.

If liberals already hate Elon, then the best course of action is for him to pander predominantly to people on the right. Right now, the best way to do that is by advertising free speech.


It's not subjective at all, it's a judgement of whether speech aligns with cooperation or attack as understood in human nature within tribes/communities.

Pro-social doesn't mean "positive" or "good" or "correct", it means that it's an attempt at progress or improvement for the community, from some sincere point of view.

"We should kill all nazis" is pro-social in terms of sincere critique, and anti-social in the threat of wanting to kill them. If we're at war with nazis then calling to kill them is likely mostly pro-social. If they're part of our tribe, whereby you are threatening your own people, then it's definitively anti-social. "We shouldn't accept nazis" is fully pro-social, and so is "we should all be nazis".

We can instinctively tell when someone is trying to be cooperative or offensive (attacking, deliberately destructive) to us in some way. We have the capability of recognizing it even if there are cultural/moral/logical (epistemic) divides. We just have to take the time to understand the other party's perspective and situation, then we can categorize their behavior.


And yet, organizations like FIRE will happily defend the "speech rights" of TPUSA students who take a class taught by one of my trans friends over and over just so they can write hate speech in legally protected coursework with the full intention of failing the course.


In what manner were they doing this? ie. were the students making counter-arguments during the course in a civil manner, or were they attacking the professor in an abusive manner?

I'm assuming what they did was something like write "There are only two genders" in an exam paper as a form of protest. If that's the case then I'd say what they're expressing is pro-social, but they're doing it in an anti-social manner, and should have been reprimanded on that basis.

And, assuming again that that's what happened, then it's definitely not "hate speech" - which is exactly what I just tried to differentiate in my previous comment.

But reading about this case, it seems that FIRE was trying to get TPUSA to be allowed as a student group on campus: https://apnews.com/article/scranton-28d927628ee14bf5aefbaaa7...

There's nothing there about them trying to defend students joining courses and being toxic in them.


No. They would write considerably more extreme things than that.

This isn't taking place in course discussion. The course has nothing to do with trans people.

They take a course in a topic they do not care about that is taught by a trans person (or a gay person or a woman). They deliberately write statements specifically targeting the status of the professor in an extreme way, but in legally protected places. They do this in an attempt to get the professor to react in some manner so they can sue the school for discrimination against conservative students. They are clearly coached by a legal team, because 18-year-olds do not actually understand the precise legal boundaries they can walk up to.


Then yeah, that seems totally anti-social and should be shut down.


What is pornography, exactly?

Ok, maybe spam can be defined more or less formally (but you will struggle to define "advertisement", I'm afraid, so maybe spam has the same problem).

But "pornography" is a big can of worms. Each culture and each person has its own definition, and global platforms are, errr, global.

As a result, we cannot post classical art to Facebook because of those horrible women's nipples, you know. Pornography.


The fact there are shades of grey doesn't mean black and white don't exist, though.

A social media site needs a sensible response to Michelangelo's David - hopefully not censoring it - but also needs a sensible response to Goatse getting upvoted to the front page by trolls.


I see two problems in your example:

(1) Where will you draw the line? Who will you ask? Will you ask Iranian or UAE users? Will you ask US users (and the USA is a very puritan country, even in 2022, IMHO)? Will you ask users from regions of Africa where a bare female chest is the norm (bonus question: will you allow selfies from these users)? Why? Why not?

(2) Why can trolls upvote something onto a user's front page at all? Why did something not posted or liked by the user's "friends" ("subscriptions", "connections", you name it) appear on the front page of a logged-in user? It is a social network, not a news portal ;-) And if over-the-line content is liked by a user's "friends", the user can unsubscribe at any moment; it was their decision to add content from the offender to their feed in the first place.


Greg Lukianoff also suggested drawing the content moderation line at legal speech [1].

> Why does this matter? Because free speech, free expression in general, is a means to express values and beliefs.

Saying this does not convince people who are concerned about hate speech. Such individuals imagine that the government or the platform will, or can be coerced, to simply remove whatever they do not like. These folks, who consider themselves among the majority, never imagine that they will one day be a minority, or that such power will then be used against them. The truth is that all of us are being moderated all the time on social media, but that's hard to demonstrate in all cases at all times, which is why the secrecy of such moderation tends to be effective for a period.

Drawing the line at legal speech is important because words are not violent, and moving the line anywhere else leads to more disagreement. Other people's words, which you may find offensive and which may cause real psychological harm, are still discretionary. Some may find them harmful, others will not. Laws aren't supposed to be subjectively interpreted. The exception for when speech can be punished by law is defined by Nadine Strossen as words that "in context, directly cause specific imminent serious harm" [2].

> No one values spam, so no one writes hand-wringing essays about whether it is ok to kick spam and spammers off social media.

That's not really what happens though. Spam is a useful mechanism to get platforms to build more tools that secretly remove content. I mention it in my talk at 28:00 [3]. These new censorship tools are mostly used to suppress speech from individuals, not spammers.

> Most people recognize that porn, even if they like it, is not something that should be in everyone’s faces all the time, so there is little existential concern for free speech as a whole when porn is excluded or hidden from some contexts.

That's not true, "obscenity" has always been under attack. Look up Anthony Comstock [4] [5], a largely successful crusader against all things he found obscene. He didn't stop at pornography, he went after literature describing contraceptives, abortion, and even people who just criticized him. It was just like how today's radical trans movement seeks to remove voices of detransitioners from social media [6]. It doesn't fit their world view, they find it offensive and they don't want you to see it. That said, there is broader agreement about keeping pornography away from children.

The vast amount of censorship today, most often done secretly, cuts out the middle and enables both extremes to isolate themselves in their own bubbles. Instead of trying to baby-proof our world, we should be world-proofing ourselves [7].

> You cannot expect to make sense of free speech if you try to ignore other values.

Nobody's saying we should ignore other values. If you hold this view then you've misinterpreted the constitution, whose 9th amendment [8] states that rights are not to be held in conflict with each other.

What should be said here is, you cannot make sense of free speech if you do not examine its history and how it's been relentlessly defended, not just in the US, but everywhere possible for arguably the whole of human history, with varying degrees of success.

[1] https://www.thefire.org/news/blogs/eternally-radical-idea/op...

[2] https://books.google.com/books?&hl=en&id=whBQDwAAQBAJ&q=in+c...

[3] https://cantsayanything.win/2022-10-transparent-moderation/

[4] https://en.wikipedia.org/wiki/Anthony_Comstock

[5] https://play.google.com/store/books/details/The_Mind_of_the_...

[6] https://archive.ph/1NeiV#selection-1513.7-1513.57

[7] https://www.bendigoadvertiser.com.au/story/76283/world-proof...

[8] https://constitution.congress.gov/constitution/amendment-9/?...


Just FYI, hate speech being free speech is an American thing. Other democracies in the world have frameworks for limiting/prioritizing basic rights, the right to free speech included (take, for example, German moderation laws), and have not become totalitarian dystopias yet.

One opinion I've heard in Germany is that the free speech absolutism and value neutrality of the constitution of the Weimar Republic allowed NSDAP to take power in the first place.


> hate speech being free speech is an American thing

It's the constitution, innit? It's a confusing document, with many compromises. But many US citizens treat it like the Shroud of Turin. The German laws are tied up with Germany's history in the 20th C. In other parts of Western Europe, restrictions are less stringent and less specific.

Obviously there are types of "harmless" speech that are likely to provoke a violent response. Personal abuse is an example, but so is the language of the likes of Enoch Powell's "rivers of blood" speech. Do people really think that speech that's likely to cause public disorder is OK?

I like the author's distinction between "cultural" and "legal" freedom of speech. I instinctively dislike restrictions on speech; but provided they are used mainly to nudge people into treating their fellow citizens with respect (and with an explicit exception for satire), I see them as an overall good. That is, you can use legal restrictions, with a light touch, to encourage an atmosphere of cultural restraint.

Arresting that guy who protested Charles III's accession by holding up a blank sheet of paper: that's not a good thing.


> > hate speech being free speech is an American thing

> It's the constitution, innit?

Is it? I wasn't aware that hate speech was explicitly singled out as protected in the Constitution.

> Do people really think that speech that's likely to cause public disorder is OK?

Public disorder, no. But innocent people getting hurt? Well, maybe. Lots of speech that hurts innocent people is already banned. Things like libel/slander. Threats. Blackmail. All of these can hurt someone, silence someone, or inspire someone else to hurt someone. I don't think anyone ever objected to these being banned, and even punishable with real prison sentences.

> Arresting that guy who protested Charles III's accession by holding up a blank sheet of paper: that's not a good thing.

Did that happen? That's a silly thing to arrest someone for. It calls back to Putin arresting people for holding up blank sheets of paper.


> Things like libel/slander.

Those are civil offences; they're not "banned", the victim has to sue for damages.

> Threats. Blackmail.

Those are indeed banned, i.e. criminal. They are both effectively threats. It's a criminal act to threaten a criminal act.

> Did that happen? That's a silly thing to arrest someone for.

Oh yes, it did:

https://www.standard.co.uk/news/uk/police-arrest-blank-paper...

"Silly" is rather a gentle way to characterize it; it was oppressive. But this country is very silly about it's monarchy - as the USA is about it's constitution.

I am a republican (small 'r'), and using the police to suppress anti-monarchist sentiment is offensive, and to me scary. Anti-monarchist opinions have a long tradition here; there was another king called Charles, whom we decapitated nearly 400 years ago.

[Edit] No threat to Charles III intended; got to be careful what you say, if you can be arrested for holding a blank piece of paper, maybe you can be arrested for having a blank mind.


> Those are civil offences; they're not "banned", the victim has to sue for damages.

Is that not still a form of punishment? And a reason why most people would refrain from saying these things?

> > Threats. Blackmail.

> Those are indeed banned, i.e. criminal. They are both effectively threats. It's a criminal act to threaten a criminal act.

A threat is a threat to do something illegal. Blackmail is a threat to do something that's often completely legal, but sufficiently embarrassing that it would still make the target change their behaviour.

But as with libel and slander, the fact that they're illegal doesn't mean they don't happen. In fact, there are plenty of people threatening others online. And often it takes a judge to decide whether or not that threat counts as a credible threat. And then there are indirect threats, where the threatener doesn't directly threaten, but directs others to do it. Possibly in an unspoken way. And sometimes the vague things someone says can still inspire a mob to storm the Capitol with a gallows from which to hang the Vice President.

Where do you draw that line? And can you only draw it after people act on it? These issues are incredibly complex, and very context dependent. And pretending they're simple is going to make us ignore all the edge cases. And there's a lot of those.


> Is that not still a form of punishment?

No, it's not. It's compensation for damage done. If a bad driver bends your car in an accident, they don't get punished, they just have to pay for your repairs.

> A threat is a threat to do something illegal. Blackmail is a threat to do something that's often completely legal

You seem to have contradicted yourself; if a threat is a threat to do something illegal, how can a threat to do something completely legal be a threat?

> But as with libel and slander, the fact that they're illegal doesn't mean they don't happen.

There is such a thing as criminal libel, for which you can be punished; but in the general case, neither libel nor slander is "illegal". They are torts, or "wrongs", for which a victim can demand compensation. Criminal libel has been abolished in my jurisdiction. I don't think it ever existed in the USA. [Edit] I'm wrong; some states apparently have criminal libel statutes.

> And can you only draw it after people act on it?

"Threatening behaviour" is a criminal offence here. It generally means placing someone in fear of physical or mental injury. You don't have to actually injure them to commit the offence.

> make us ignore all the edge cases

I guess that's why libel cases often cost lots of money in lawyer's fees. Equity and tort are two fields of civil law with lots of edge-cases. People litigate these cases precisely because the lines aren't drawn clearly. They tend to turn on issues like fairness, issues that are ultimately a matter of judgement.


> It's compensation for damage done.

But you're still being held liable for that damage, despite the fact that you did not cause any physical damage. It was just words, and yet their effect is considered damaging.

That's aside from the fact that there's also a thing called "punitive damages", so that still is quite explicitly punishment. So these are words that do damage, and they're punished for it.

> if a threat is a threat to do something illegal, how can a threat to do something completely legal be a threat?

I guess I should explain the terms I use a bit better. You're right that words can mean different things, and I'm talking about the legal concepts of threat and blackmail, but in my too-brief explanation of blackmail, I use the word "threat" in one of its other meanings.

So here's what I mean by threat and blackmail:

Threat: expressing an intent to hurt someone. Here, hurting someone is illegal, but expressing the intention to do so is illegal as well.

Blackmail: expressing an intent to reveal embarrassing information about someone unless they do something, in order to coerce them into doing that thing against their will. Here, revealing the embarrassing information might be totally legal. In fact, if the information is about a crime the target committed, revealing it might even be the right thing to do. And yet saying you're going to reveal it unless they do a particular thing is illegal.

In any case, I think we have established that there are forms of speech that are illegal, are considered damaging, and/or are sufficiently harmful or damaging to warrant compensation or punishment.

So to get back to the original topic: is hate speech damaging? And is it sufficiently damaging to warrant compensation or punishment? And if we were to conclude that they are, legislating that is not necessarily a more significant infringement on the freedom of speech than the existing laws against libel, blackmail and threats are.


> But you're still being held liable for that damage, despite the fact that you did not cause any physical damage. It was just words, and yet their effect is considered damaging.

It is a complete defence to an accusation of libel that what you wrote is true.

That is, libel means publishing damaging lies about someone. In most jurisdictions, it also means that the lies were malicious: the writer or publisher intended to cause damage. Damages are assessed as lost money; if the lies hurt your feelings, you won't get damages. But if your hurt feelings required therapy, you can sue for the therapist's fees.

It's annoying (understatement) that damage to reputation is assessed in this way; it means that rich, famous people get much larger awards than ordinary people, because they lose more money from damage to reputation (they have more money to lose, for one thing). You can't sue someone for torpedoing a $1M deal if you aren't the kind of person that makes $1M deals.


> It is a complete defence to an accusation of libel that what you wrote is true.

Obviously. It would be a bit too dystopian if the truth was illegal (though in oppressive regimes it often is). I agree that it's a problem that harmful lies about rich people are punished harder than lies about poor people, because they can attach a larger monetary value to the damage. That's of course a product of the fact that it's a civil issue and revolves around damages. Getting rid of that artefact would probably involve making all lies illegal, and I think everybody here agrees that that would be a couple of steps too far.

And then there's the fact that the entire justice system is simply far more accessible to rich people. And they do sometimes use it to try to suppress inconvenient truths.


As an aside, I love that blank sheets of paper have turned into the go-to way to message a protest. You can't get less menacing, or send a stronger "the emperor has no clothes" message, than that.


> Things like libel/slander. Threats. Blackmail.

Libel and slander are civil matters in the United States.


Germany has freedom of opinion, not freedom of speech. While they're generally understood (in Germany) to be close to the same, they're not understood the same outside of Germany.

You're allowed to think whatever you want, but you're not allowed to publicly say it unless it's within the bounds of legal opinion (which are pretty wide - it's not like you're only allowed to state one opinion - but they are also clearly limited). Insults, while clearly speech, are forbidden under the penal code (up to one year in prison, though it's usually settled with fines), so they're not a civil matter.

What Germany certainly doesn't have is a culture of free speech. It's important to remember that the last totalitarian dictatorship in (East) Germany only ended 30 years ago, and there was very little freedom of speech there, and the limits were violently enforced.

German culture highly values conformity, and it's no accident that the spiral of silence was proposed by a German researcher (Elisabeth Noelle-Neumann). It essentially argues that people check their views for majority compatibility and will stay silent if they find they're not accepted. Because they stay silent, others who think like them will find their ideas unaccepted by the majority and will also stay silent, and so on, as an act of social survival (which it certainly is in Germany, but isn't as much in countries with a culture of free speech).

> speech absolutism and value neutrality of the constitution of the Weimar Republic

The Weimar Republic had largely the same fundamental laws regarding freedom of speech as the Federal Republic of Germany today: you're allowed to say what isn't outlawed by a law, and you must not be punished for saying it.


Thank you for the insights. I'll only disagree with the last statement. The Weimar constitution was value-neutral as opposed to the Grundgesetz today - meaning an NSDAP leadership, or the NSDAP as a party, was legally compatible with the constitution back then. Contrast that with Art. 1 Grundgesetz today - "Human dignity is inviolable. It is the task and duty of the state to protect it" - which was introduced specifically to prevent any inhuman ideology from taking hold.

So in legal practice the right to human dignity overrides the right to free-speech.


Not advocating for the NSDAP, but do you think it's worrying that one government would seek to make illegal speech that could make people interested in a different government? That doesn't seem democratic to me.


You're talking about the Paradox of tolerance. I don't see why it is necessary to provide a democratic platform to undemocratic or misanthropic ideologies. I think it's misguided purism.


That's all fine as long as I get to decide what is considered undemocratic or misanthropic. :-)


Not PP but in my opinion there is a big difference between how the law addresses speech that makes people interested in a different government and speech that promotes extreme ideologies.

For example, if you look at countries like Russia, China, Iran, Myanmar, etc... what's illegal or repressed is any promotion of opposition to the government. This is plainly anti-democratic & authoritarian.

Whereas in a lot of Europe it is only the promotion of extreme ideologies that is illegal; apart from that, the democratic discourse is as messy and varied as you'd expect it to be. (And due to the variety of electoral systems, I'd argue that political discourse in much of Europe is much more varied than in places like the US.) We have countries swinging from left to right wing governments and even powerful political parties that started as far right wing or terrorist groups.

What we all see is how the balance can be tipped from fragile democracy to repressive authoritarianism e.g. Hungary.

Democracy is a constant effort, requiring constant vigilance. These are all discussions worth having, at least in good faith.


How laws are sold to the public are not how they get applied. Nixon's 1971 Federal Election Campaign Act, meant to regulate campaign finance, was immediately used to curtail speech critical of Nixon.

Here's former ACLU executive director Ira Glasser in a podcast at 1:49:10 [1], transcript [2]:

> Times writes back and says, we can't publish this ad because we – because you criticize Nixon and it's 1972, and it's in the middle of an election campaign, and if you criticize Nixon, unless you have his permission, it counts against the ad – the expenditure limitation of his opponent. I said, I have to get his permission in order to criticize him, what are you, crazy. It's the same issue that Cambodian bombing ad...

> So, we file a lawsuit, we win that lawsuit, and we strike the law down for the second time. This goes on over and over again, and I'm saying to myself, this is a law that is supposed to get in the way of nasty rich folks, and the only two times it's been used so far is against these three aging radicals and the ACLU on totally legitimate, core First Amendment speech, how can that be, and that's how we get into this issue, and why we see it so clearly as a free speech issue. It goes on and on and on and on, and they kept coming back, doing it again, and again and again and again in 16 different versions.

Earlier in that podcast episode he makes a good case for campaign finance being like the right to travel around 1:36:10 [3]. Have a listen, you might learn something. The embedded player on this page [4] lets you skip around the episode without subscribing to Soundcloud.

[1] https://soundcloud.com/so-to-speak-the-free-speech-podcast/f...

[2] https://www.thefire.org/news/so-speak-podcast-transcript-for...

[3] https://soundcloud.com/so-to-speak-the-free-speech-podcast/f...

[4] https://www.thefire.org/news/so-speak-podcast-former-aclu-ex...


> How laws are sold to the public are not how they get applied. Nixon's 1971 Federal Election Campaign Act, meant to regulate campaign finance, was immediately used to curtail speech critical of Nixon.

This is an argument against laws, corruption, and government. It is not, however, an argument against any particular position on speech. A capricious Supreme Court could revert to the speech precedent the US had for the first 150 years of its existence, which we both presumably agree is flawed. Laws can be written and misapplied.

Neither of those things being possible suggests that, as a value, free speech maximalism is superior to free speech...almost maximalism. It's clear that you can have stability and discussion and support for fairly extreme ideologies under the more restrictive speech standards of Europe.


> This is an argument against laws, corruption, and government.

No, it's an argument for studying the history of law and understanding that some are good and some are bad. They don't come with a cover sheet saying so. You need to figure it out for yourself.

> Laws can be written and misapplied.

All the more reason to study them and give careful consideration to how they have been misapplied in the past. We're in agreement there.

You're a different commenter than above, so I'm not sure if your main point is to agree with them, disagree with me, or both, and on what exactly.

> It's clear that you can have stability and discussion and support for fairly extreme ideologies under the more restrictive speech standards of Europe.

I think any anti-hate-speech laws are more experimental than the American experiment. The point of free speech is to protect minority voices. It was championed in the '60s when the left needed it, and it rose from the ashes in the early 1900s after Comstock tried to surreptitiously-but-not-so-surreptitiously squash it. The censor never calls himself a censor.


> I think any anti hate speech laws are more experimental than the American experiment

The modern American jurisprudence on speech is fairly recent, as I alluded to above. There are tons of cases through the 18th and 19th centuries where the Supreme Court upheld blasphemy and obscenity convictions, allowed the federal government to censor speech, favored Christian religion and churches over others, and even considered libel a crime.

As late as 1951 (Dennis v. US) the Supreme Court ruled that members of the US Communist Party could be imprisoned because socialist organizing was a threat. And it took 20 more years for Brandenburg to cement the modern jurisprudence, 200 years after the American experiment began, and only 50 years ago.

> No, it's an argument for studying the history of law and understanding that some are good and some are bad.

No, my point is what is socially acceptable goes beyond simply what is legal. And this is always and has always been true. Many laws can be used for tyranny. If you want to avoid the potential for tyrannical laws, you can't have any laws, and that doesn't work either.

Like again, this whole thing comes down to the fact that Western European countries (and the US!) have had more restrictive definitions of free speech for far longer than Brandenburg has been the law of the land, and they aren't tyrannical.


I'm quite lost on your overarching point.

Do you agree with how US jurisprudence has drawn the lines on free speech? If not, what would you do differently?


The concern that people have is that the definition of what is "extreme" tends to creep, in one direction or another, in western democracies. Perhaps that's a feature of democracy itself, from a certain point of view, but for example the nominal opinion in the US on immigration in the 1990s is not really that different than that of today's average Trump voter. Democracies tend to have short memories when it comes to their own history.

>...what's illegal or repressed is any promotion of opposition to the government. This is plainly anti-democratic & authoritarian.

What happens in democracies (or at least in the US) is that this just gets branded as "mis" or "disinformation", or worse a "conspiracy theory". A formulation I'm fond of is "this isn't happening, and it's good that it is", because it's a pattern repeated throughout political discourse. Depending on who is talking, something is either a crazy extremist conspiracy theory, or something we should all be cheering on.


> The concern that people have is that the definition of what is "extreme" tends to creep, in one direction or another, in western democracies. Perhaps that's a feature of democracy itself

It's a feature of humanity from where I stand. People crave both stability and change. Beating them over the head to force stability doesn't work so well, nor does changing too quickly. The idea of a system that encourages peaceful transitions is to find some balance. Of course, that balance can also be disrupted.


The Overton window, basically. That's where the constant vigilance and discussion come in, to keep open the possibility of moving the window. For example, I still hear a LOT about conspiracy theories, anti-vaxx, etc., even though I don't want to. It's basically two sides shouting as loud as they can that the other side is talking nonsense; it's just that one side is the government that the majority of people voted for and support, so it gets a louder voice. Anyone equating how speech about covid restrictions or vaccinations was handled in most of Europe with truly authoritarian repression of discussion about changing governments in China, Iran, etc. is hyperbolic, disingenuous or ill-informed.

Societal totalitarianism is a very real concern. In its benign form it leads to quirky cultures and in its extreme leads to total suppression of any opposition to the zeitgeist, even without a fully authoritarian government. Unfortunately there's a massive fuzzy grey boundary between the two and it's up to the people of a country to course correct when they can, as getting back from a restrictive society is difficult.


>Societal totalitarianism

This is a good phrase; I'm stealing this. I agree with your diagnosis of it, as it can indeed be benign and result merely in "people being weird".


> Just FYI, hate speech being free speech is an American thing. Other democracies in the world have frameworks for limiting/prioritizing basic rights, right to free speech included (take, for example, German moderation laws) and have not become a totalitarian dystopias yet.

Sure, and America hasn't become a country of hate speech or devolved into tyranny yet either, despite what some claim, so do those laws actually add any value? Seems speculative at best.

> One opinion I've heard in Germany is that the free speech absolutism and value neutrality of the constitution of the Weimar Republic allowed NSDAP to take power in the first place.

Some of their first moves after taking power were to crack down on speech and imprison people who spoke out against them. What would have happened if free speech had been a strongly held principle among the people, such that they wouldn't have tolerated that? What if dissenting voices had been permitted to speak about and expose the secret death camps?


> Sure, and America hasn't become a country of hate speech

Hate speech spewed by the likes of Tucker Carlson caused a mass shooting like a week ago...


I'm sure that's one interpretation. And yet Tucker hosts the most watched cable news show, last I checked, so clearly most Americans don't agree with your characterization of his speech.

Furthermore, drawing a causal link between Tucker and the shooter is speculative at best.


"Most watched cable news show" does not equal "most americans"; let's just clear that up before we move on...

Also, the people who watch his show agree with his speech and probably don't consider it stochastic terrorism because they also want to genocide trans people. It's not a hard concept to grasp. It's a hate-hour for a very specific slice of the population. And that slice is committing more and more acts of violence.

Do I think Tucker is DIRECTLY responsible for the shooting? Maybe not; like you said, it's speculative. But do I think there is a causal link between the entire "trans people are evil" rhetoric crowd and the shooting? Absolutely. I think it'd be difficult to argue the other direction; we have studies and statistics on trans hate crimes already.

And then do I think Tucker Carlson is part of the "Trans people are evil" rhetoric crowd? yes, hard to deny that imo. And you said it yourself, he's the largest voice out there in that crowd. So I think he definitely shoulders some blame.

Also, the speculation turns out to be true more and more often. Conservative talk show hosts are cited in conservative domestic terrorists' manifestos all the time. This "hour of hate" is absolutely directly contributing to violence.


Tucker disagrees with the extent of some liberal ideas. That does not make him responsible for others taking his beliefs, magnifying them, and committing atrocities.

If bluntly disagreeing with progressivism makes you blame-worthy for mass-murderers, well then we should throw all conservative media hosts in indefinite prison. Or execute them.


Uhhh, he magnifies his own beliefs without the help of others.

"The extent of some liberal ideas" is a really soft way of putting "open racist and transphobe" He literally did a segment on the "Great Replacement Theory".

He's a white supremacist. He can get absolutely, thoroughly, and without lube, fucked.


Sometimes I wonder if people like you and I even experience the same reality.


No, we experience the same reality. You're just a white supremacist apologist, at best, and an actual white supremacist at worst.

Tucker Carlson thinks that immigration is ending the white race. That's a direct quote, go watch the segment.


You seem like a positively delightful person. Thanks for forcing me to watch my first Tucker Carlson segment since this keeps coming up [1].

I didn't hear this "direct quote" of yours anywhere. Tucker doesn't mention the "white race" at all, he speaks about government policies and how they influence native population growth (Americans born to Americans), and about the rate of immigration. If I missed your direct quote, then please provide a timestamp. While I find many of the supposed factual claims there debatable, this argument that too much immigration can be destabilizing is not obviously false.

What I've seen time and time again in situations like this, from both sides, are claims of "dog-whistling", wink-wink-nudge-nudge "we know what you're really talking about [insert-person-I-hate]".

I'll also note that "demography is destiny" has been an unofficial Democratic party tag line for decades now. The notion that Democrats have more vested interest in increasing immigration is not exactly a conspiracy theory, although given how the Latin population is now leaning Republican I think they're realizing that that tagline is deeply flawed as an actual strategy.

[1] https://www.foxnews.com/video/6309741334112


https://www.independent.co.uk/news/world/americas/us-politic...

Here's a short one-minute blurb; it should be digestible enough for you.

"Legacy Americans" "The Great Replacement"

These are not dog whistles. They're not subtle. He's just being racist.

> You seem like a positively delightful person.

At least I'm not a racist xenophobe.


Yeah good call. I'm actually an evil racist.


> Also, the people who watch his show agree with his speech and probably don't consider it stochastic terrorism because they also want to genocide trans people.

This is what happens when you form ideas divorced from any evidence. More Democrats watch Fox News and Tucker Carlson specifically than any of the more liberal-leaning networks. Do most Democrats want to genocide trans people too? Or are you claiming they're watching Tucker ironically?

> And then do I think Tucker Carlson is part of the "Trans people are evil" rhetoric crowd? yes, hard to deny that imo.

It's easy to deny it actually, he has never called for violence against trans people to my knowledge, and in fact has always condemned violence. People are not so circumspect if they think they're legitimately fighting evil.


Article asserts that censorship attempts by the Weimar Republic against the Nazi party empowered the Nazis, and gives examples.

But I am not a historian and cannot reconcile the article's content with what you're stating here.

https://www.thefire.org/news/blogs/eternally-radical-idea/wo...


> But I am not a historian and cannot reconcile the articles content with what you’re stating here.

You're as free as any historian to form an opinion and share it. The rest of us can take the fact that you're not one into account while reading it.

Thank you for sharing a link to this series, I was not aware of it.


I mean, it is usually best to find experts and listen to them rather than just forming hunches.

Imagine saying this about microbiology or something.


It's fine for people to have opinions about science and to express them. People learn through interaction.

Dismissing people by saying things like "imagine joe six pack sharing his opinion on microbiology" is part of what made the anti-vax or vax-skeptical crowd so upset. In many circumstances, the rest of us did not meet them where they were. That requires listening, and yes, encouraging uninformed people to express their opinions.

It can be popular to behave dismissively online because it seems like you don't need to deal with the aftermath. When everyone does it, that adds up.

I'm not saying everyone needs to be out there correcting misinformation, just that we deal ourselves a better hand when we accept that other people get things wrong. It's okay for people to be wrong, even in the absence of being corrected. Telling people to shut up doesn't nip the virus in the bud, it is the virus.


Didn't know that, thank you. I was speaking from the perspective of constitutional law as I know it.


>One opinion I've heard in Germany is that the free speech absolutism and value neutrality of the constitution of the Weimar Republic allowed NSDAP to take power in the first place.

I hear that a lot nowadays too. I wonder why that is?


> Just FYI, hate speech being free speech is an American thing. Other democracies in the world have frameworks for limiting/prioritizing basic rights, right to free speech included (take, for example, German moderation laws) and have not become a totalitarian dystopias yet.

There are plenty of critics living in those countries who think it's a bad idea to limit free speech beyond limiting imminent incitement of violence. Nadine Strossen discusses this in, I believe, her appearance on the Higher Ed Now podcast [0], including this excerpt,

14:20: "Countries like Germany which were in the vanguard of enacting and enforcing laws against hate speech were in the rear guard when it came to outlawing actual discrimination against people in the workplace, in places of public accommodation, in housing. It wasn't until very shockingly recently that Germany was dragged kicking and screaming by the EU to adopt those laws… targeting expression is only a superficial manifestation of a deep-seated problem, and it also is divorced from the actual real world consequences of discriminatory and violent conduct."

> One opinion I've heard in Germany is that the free speech absolutism and value neutrality of the constitution of the Weimar Republic allowed NSDAP to take power in the first place.

The Weimar Republic had hate speech laws. Nadine makes the case [1] that photos of Hitler imprisoned for running afoul of those laws helped him rise to power.

It's not so unlike Milo receiving attention for being prevented from speaking on campus. The protesters do not realize their dogmatism and censorious behavior bring Milo more followers. Protesters should consider how their behavior will be perceived [2] by someone who knows nothing other than the fact that one person wants to speak, there is a willing audience, and that protesters either stopped that conversation from happening by being noisy, or responded with counter speech and let the haters bury themselves [3]. If you are shouting over someone, you become a hater. It doesn't need to be that way.

[0] https://podcasts.apple.com/us/podcast/nadine-strossen-free-s...

[1] https://youtu.be/J1iZffRFs8s?t=2838

[2] https://www.youtube.com/watch?v=E0T9XSG73kY&t=4889s

[3] https://youtu.be/E0T9XSG73kY?t=3083


Have to be honest, I didn't know about hate speech laws in the Weimar Republic.

But in terms of equality, I'd say what we get in practice *today* in Germany and Switzerland (don't know about Austria) is a society that is arguably *more* equal than the vanguard.


And yet people only have so much time, and so much of it continues to be dominated by work and transportation to/from work (if you’re not lucky enough to have the option of remote work). When you’ve been busy and focused for the better part of 10 hours, bettering oneself for immaterial gains is going to be unappealing when you could doom scroll TikTok, watch the latest hit TV series, or even better yourself in areas with easily-realized gains, like physical workouts or educating yourself for a career change. That’s why people opt for moderated communities: actually learning to handle content and ignore repugnant or vile content takes diligence, and building that up is time-consuming, so they’ll happily pay money (via ads or directly) to offload filtering out that content.


> That’s why people opt for moderated communities: actually learning to handle content and ignore repugnant or vile content takes diligence, and building that up is time-consuming, so they’ll happily pay money (via ads or directly) to offload filtering out that content.

The catch here is that people are not aware that their content is frequently shadow moderated. Even if you do try to engage with vitriolic commentary, your comment may be removed without your knowledge, and the forum will still show you your comment as if it's not removed. It works like this on Reddit for all comments [1], but again as I said, all forums now engage in shadow moderation. And at the end of the day it is also an us problem, not just a them problem, because we're often the ones reporting the content that gets taken down. When people are made aware that this is happening, they do call for change, for example [2],

> ...what is the supposed rationale for making you think a removed post is still live and visible?

> ...So the mods delete comments, but have them still visible to the writer. How sinister.

> ...what’s stunning is you get no notification and to you the comment still looks up. Which means mods can set whatever narrative they want without answering to anyone.

Here's another way to think about whether or not to use shadow moderation. If you think it's a good idea to use it, then you think it's an effective tool for changing minds. Your ideological opponent, whose comments you want to secretly hide (lest they give you more trouble), is, by your definition, immoral. The extreme left and right both think this of each other. And since they are immoral, they will make more immoral use of this tool than you do, thereby effecting more influence than you, which is precisely the opposite of what you set out to achieve. Therefore the only winning move is not to play [3]. You cannot support the use of a tool in the world without your opponent eventually getting their hands on it. So it's really a game of: who can make the first truly free speech forum that encourages rather than secretly shuts down debate?

[1] https://www.reddit.com/r/CantSayAnything/comments/ynfu22/wri...

[2] https://www.reveddit.com/about/faq/#react

[3] https://youtu.be/MpmGXeAtWUw?t=74


> The catch here is that people are not aware that their content is frequently shadow moderated

Is that really the catch? Reddit didn't get to where they are today by saying "we totally allow free speech". Since its inception, it's been a trope that Reddit actively moderates away harmful content, and even constructive-but-contrarian content, to the point where every small subreddit community is a circlejerk/echo chamber for its own ideals. Anyone who just downloads the app might not know it at first, but if they actually dive into creating an account and joining subreddit communities, they'll learn soon enough. Moderators try not to make their own community mad by silencing all criticism of themselves (when they do, it tends to be exposed pretty quickly), and any huge shifts in policy tend to sprout divergent communities, like how r/superstonk was created out of a wallstreetbets moderation issue.

I think people understand that, especially when they're using an iPhone[0,1], everything they see is going to be filtered, with the exception of legal porn on social media apps. It's just the way that anything and everything gains enough traction to become relevant among the masses, and they'll know if they're on a "true free speech" platform as soon as the platform shows them pro-Nazi images right next to pictures of puppies with zero algorithmic filtering or sorting.

0: https://9to5mac.com/2022/01/11/tumblr-for-ios-updated-with-s...

1: https://www.pcmag.com/news/tumblr-explains-why-it-still-bans...


> Is that really the catch?

Yes, users don't know about this. That is clear from the above quotes.

> Reddit didn't get to where they are today by saying "we totally allow free speech".

I don't know whether shadow moderation was necessary for Reddit to grow. They certainly don't inform users about it. That's manifested by the secrecy inherent in the feature itself.

> I think people understand that, especially when they're using an iPhone[0,1], everything they see is going to be filtered

People expect that authors are informed when their content is removed. That isn't happening with any consistency on any of the platforms, and it's built into the system. It is not a choice made by mods.

> they'll know if they're on a "true free speech" platform as soon as the platform shows then pro-nazi images

That's just the result of sidelining today's unpopular extreme. Both 60 and 120 years ago, information about gay marriage or contraception was considered immoral by those in power. It depends who's in charge. The environment will flip at some point, so don't give up your free speech principles. The shoe will eventually be on the other foot.


Yes, yes, but extremists don't want to co-exist on a level playing field overseen by Uncle Sam, or any other Uncle. They want to be the Uncle and wipe out serious dissent. To them, the tool is not the problem. The other side having the tool is the problem.


> Yes, yes, but

The refrain "I believe in free speech, BUT" is so common that first amendment lawyers regularly joke about it [0] [1] [2].

"Free speech for me— but not for thee" is also the title of an excellent book by Nat Hentoff [3]

Personally I like how ACLU founder Roger Baldwin put it to historian Arthur Schlesinger, Jr.:

> Arthur: "What possible reason is there for giving civil liberties to people who will use those civil liberties in order to destroy the civil liberties of all the rest?"

> Roger: "That's a classic argument, you know. That's what they said about the Nazis and the Communists, that if they got into power they'd suppress all the rest of us. Therefore, we'd suppress ‘em first. We're going to use their method before they can use it.

> Well that is contrary to our experience. In a democratic society, if you let them all talk, even those who would deny civil liberties and would overthrow the government, that's the best way to prevent them from doing it." [4]

[0] https://youtu.be/0ZpsiLX9_HY?t=419

[1] https://youtu.be/J1iZffRFs8s?t=736

[2] https://youtu.be/-ByRjHwknbc?t=1210

[3] https://archive.org/details/freespeechformeb0000hent/page/n9...

[4] https://youtu.be/ND_uY_KXGgY?t=1225


> Well that is contrary to our experience. In a democratic society, if you let them all talk, even those who would deny civil liberties and would overthrow the government, that's the best way to prevent them from doing it.

This isn't empirically true though, as someone else mentions in this thread there's a fair amount of evidence that past a certain point, more free speech makes fomenting a populist revolt to end democracy easier, not harder, and that if democratic stability is what you're opting for, something more akin to western Europe's laws would be better than the US's.

Like, as much as I like the ACLU, the argument you're presenting is functionally "trust us", which, no, I don't trust the free speech advocacy org to have an objective view on the dangers of unrestricted speech. It is contrary to your nature.


> This isn't empirically true though, as someone else mentions in this thread there's a fair amount of evidence that past a certain point, more free speech makes fomenting a populist revolt to end democracy easier, not harder, and that if democratic stability is what you're opting for, something more akin to western Europe's laws would be better than the US's.

Since you must have already read my above comment about punishable speech that "in context, directly causes specific imminent serious harm", you must now be arguing that there are words beyond that which cause violence. That's wrong. Words are not violence. When you punish speech that does not cause violence, you turn into the censor. Consider how you sound when you argue this point [1]. It draws people towards the cause you seek to censor, not away from it. You enable your opponent, just as Anthony Comstock did over a hundred years ago. People root for the underdog, they want to hear both sides, and to be trusted to make up their own minds. When you take away their ability to choose, you become their enemy, not their ally.

> the argument you're presenting is functionally "trust us",

Fundamentally, this is the exact opposite of the point I'm making. The censor is saying, "trust me, this was bad, seeing it will hurt you". Free speech advocates are saying, "make up your own mind" and that you have the right to associate with whomever you please.

> which, no, I don't trust the free speech advocacy org to have an objective view on the dangers of unrestricted speech.

Free speech does not mean unrestricted speech, as mentioned above.

> It is contrary to your nature.

It sounds like you feel there is no common ground to be found. Yet we are all human. We have both similarities and differences that can be discovered through conversation, some of which may be easy like this one, some of which may be hard. Giving up on that is the equivalent to saying we should either all duke it out or live separately. I'm probably not the most difficult person debate-wise you'll ever meet. You might as well start somewhere.

[1] https://www.youtube.com/watch?v=E0T9XSG73kY&t=4889s


> Since you must have already read my above comment about punishable speech that "in context, directly causes specific imminent serious harm", you must now be arguing that there are words beyond that which cause violence.

Correct.

> Words are not violence.

Sure. But I never claimed as such. I said words could cause violence. Combustion is not flight, and yet combustion, in the right circumstances, causes flight. Two things need not be equivalent for there to be a causal relationship between them.

> you turn into the censor.

Assuredly, but here you're begging the question. Why is becoming the censor an ill that must, at all costs, be avoided?

> It draws people towards the cause you seek to censor, not away from it.

This is a common claim, but we know it to be untrue. While certainly some people may be drawn to the restricted section of the library, fewer people will ultimately hear the material than if it's being preached about on the street. We see this with censorship all the time, both in anecdotal examples (censorship of sex-related topics in religious communities) and quantifiable ones [0].

> Fundamentally, this is the exact opposite of the point I'm making

No, I mean that in the quote you provided, his claim that, in their experience, letting everyone talk is what actually works is totally unsupported. You have to trust him that that's true, and of course the guy who runs the free speech organization is going to say that censorship is bad. If he didn't believe that, he wouldn't run the free speech organization. My simple question is "what if there's a point after which free speech actually makes things worse"?

I mean you already agree that such a point exists: imminent violent action. But presumably I could come up with other examples of things you'd be okay with censoring (CSAM is a common example). If you're okay banning that speech, what's the harm in moving the needle a bit in one direction or the other, especially if moving it results in a society that is more stable?

I think there are good and nuanced answers to all of those questions, but you aren't even engaging with them because you seem to be claiming that, without exception, it is obvious that the optimal and least harmful choice when picking what speech we should ban is "imminent violence incitement and nothing else", and when I ask "why set the bar there", you quote someone who says "trust me". That's not convincing.

[0]: https://techcrunch.com/2017/09/11/study-finds-reddits-contro...


I think you should stop ignoring the parts you don't accept, you're coming across as dishonest.

Your line of argument ends in tyranny. "We must protect you for your own good, or better, the greater good."

Specifically with the irony that while you decry "trust us," it is precisely what is required to accept your "moving of the needle a little bit."


> I think you should stop ignoring the parts you don't accept, you're coming across as dishonest.

What parts?

> Specifically with the irony that while you decry "trust us," it is precisely what is required to accept your "moving of the needle a little bit."

Not at all, because I have not argued for any particular movement. What I'm asking for is an argument that the status quo is optimal that is more robust than "trust us, the alternatives would be worse", especially when there are fairly good examples of the needle being in other places in other nations and it being, generally speaking, fine.


> I have not argued for any particular movement. What I'm asking for is a argument that the status quo is optimal

Why? Nobody said it's written in stone. Laws remain open to interpretation for as long as a judiciary exists.

> there are fairly good examples of the needle being in other places in other nations and it being, generally speaking, fine.

It sounds like you have a particular idea about what should be changed, yet you are reluctant to say which one, and you simultaneously want everyone replying to you to address all of those possibilities ("you aren't even engaging with them").

In other words, you are placing all responsibility upon your interlocutors to anticipate your thinking and none upon yourself.


> It sounds like you have a particular idea about what should be changed

Not at all.

I mean yes, I have opinions, but I'm not advocating for any particular opinion here. I'm asking you to justify yours with something better than the ACLU president having said "trust me". Why is the line where it is in the US better than a line a little to the left or the right of it? What makes the choice to ban the speech we ban and allow the speech we allow, as opposed to banning more or less, socially optimal?

Fundamentally, I'm not asking you to anticipate my thinking, I'm asking you to catch up and engage with the questions I've already engaged with (some of which I asked upthread!). Because it is unsatisfying that the only answer you can provide to "why should we do things this way" is "trust us, the alternatives would be worse".


> I'm asking for you to justify yours with something better than the ACLU president having said "trust me".

This article is from FIRE, not the ACLU. The ACLU now interprets civil liberties and rights to be potentially in conflict with each other, which is part of why FIRE got started and grew so much.

> I'm not asking you to anticipate my thinking, I'm asking you to catch up and engage with the questions I've already engaged with (some of which I asked upthread!)

The questions you asked were already answered before you asked them. You did not absorb the discussion above before you started asking questions, and are now asking to be spoon fed answers. Take some time to read the other comments above in this chain, check out the sources, listen to FIRE speakers on why current jurisprudence is where it is. Nobody is saying "trust us", they're saying, "this is what I think, and here are the sources that inform my thinking. You can check them out and decide for yourself whether you agree or disagree."

You've ignored all of this and insist that someone must tell you why they are right. Nobody can decide for you what's right. Opinions are subjective. Make up your own mind, nobody can do it for you.


One example, where it was pointed out there are already limitations on free speech quite strictly defined under law.

Your second point is impossible to argue, because you request that someone argue against a subjective and infinitely definable 'optimal' that you projected.


> I don't trust the free speech advocacy org to have an objective view on the dangers of unrestricted speech.

Where is the evidence of this?


> Saying this does not convince people who are concerned about hate speech. Such individuals imagine that the government or the platform will, or can be coerced, to simply remove whatever they do not like. These folks, who consider themselves among the majority, never imagine that they will one day be a minority, or that such power will then be used against them.

Exactly. And it isn't even that difficult to imagine how it could happen: TERFs, or trans-exclusionary radical feminists, want to shut out trans voices, and they do this by accusing trans people of engaging in misogynistic hate speech whenever they speak out about trans issues. How sincere the TERFs are doesn't matter, ultimately, as the result is the same: Trans people being shut up and shut out in the name of keeping misogyny off the platform, or even out of the country. In the words of Vox:

https://www.vox.com/identities/2019/9/5/20840101/terfs-radic...

> TERF ideology has become the de facto face of feminism in the UK, helped along by media leadership from Rupert Murdoch and the Times of London. Any vague opposition to gender-critical thought in the UK brings along accusations of “silencing women” and a splashy feature or op-ed in a British national newspaper. Australian radical feminist Sheila Jeffreys went before the UK Parliament in March 2018 and declared that trans women are “parasites,” language that sounds an awful lot like Trump speaking about immigrants.

The "Defend Women" line is a perfect tactic for TERFs to use, and they use it early and often.

Again, claiming those people are cynically using feminism to fight a different battle is pointless: The fact they could use hate speech laws to silence a minority is the troubling thing.


> The "Defend Women" line is a perfect tactic for TERFs to use, and they use it early and often.

Yeah, it's wild. Meghan Murphy [1] has a lot of guests on who discuss this topic. Some of it is pretty eye opening because the issue is only briefly touched upon via established media, aside from right wing sources. I guess that just provides opportunity for independent journalists to grow by telling such stories. I never believed that other media would turn away stories based on ideology until I heard stories from child transitioners who later detransitioned. And I only got plugged into that by randomly reading commentary about Cloudflare blocking Kiwi Farms [2]. Anyway, it turns out that institutions, made up of humans, display human-like flaws. Who knew?

> The fact they could use hate speech laws to silence a minority is the troubling thing.

It is and it isn't. I think society largely agrees that, at least, most words are not violence. And while we do dip into pretty heavy censorship territory once in a while, and it does breed a lot of distrust and hate, it also garners more support for free speech. It's clear to more and more people every day that censorship, particularly the secretive censorship that pervades social media and IMO is the biggest issue which we should therefore tackle, does not work. Disinfo experts have been promising for years that just one more round of censorship and elections will finally, once and for all, kill off all hatred and discontent, just like people who claim AGI is just around the corner. It hasn't happened, it's not happening, and it's time to go back to our roots: free speech and counter speech.

[1] https://www.youtube.com/watch?v=I0OLsb2TJyQ

[2] https://corinnacohn.substack.com/p/the-world-should-not-need...


> Australian radical feminist Sheila Jeffreys went before the UK Parliament in March 2018 and declared that trans women are "parasites," language that sounds an awful lot like Trump speaking about immigrants.

What she actually said was this:

"When men claim to be women [...] and parasitically occupy the bodies of the oppressed, they speak for the oppressed. They come to be recognised as the oppressed. There's no space for women's liberation."

She is using this metaphor to point out that once men declare themselves to be and are accepted as women, they benefit from all the rights that generations of women have fought for - all while curtailing the means for women to speak about themselves, their lives, and their concerns as women.

Which, as I'm sure you'll agree, is a reasonable feminist viewpoint.


Thank you for the links! #6 was thought-provoking


Yeah Jonathan Rauch is great, he talks a lot about free speech. See also:

https://www.youtube.com/watch?v=E0T9XSG73kY&t=1774s

I wish del.icio.us still existed. I just want to put links in folders and share them.


Porn’s sometimes less than legal if children are able/likely to view it, no? Your spam argument is spot-on though. Someone might argue that posting cooldowns paired with URL filters and a well-tuned voting system fairly and effectively combat spam though.
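A posting cooldown of the kind described could be sketched roughly like this. The class name, threshold, and API below are invented for illustration; a real platform would pair something like it with URL filters and voting signals:

```python
import time

# Toy sketch of a per-user posting cooldown. All names and numbers
# here are invented for illustration, not any real platform's API.
COOLDOWN_SECONDS = 30  # tunable: minimum gap between accepted posts

class CooldownGate:
    def __init__(self, cooldown=COOLDOWN_SECONDS, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock        # injectable clock, handy for testing
        self.last_post = {}       # user_id -> time of last accepted post

    def try_post(self, user_id):
        """Accept the post and return True if the user is off cooldown."""
        now = self.clock()
        last = self.last_post.get(user_id)
        if last is not None and now - last < self.cooldown:
            return False          # still cooling down: reject as likely spam
        self.last_post[user_id] = now
        return True
```

Notably, this never inspects the content at all; it throttles a behavior pattern, which may be part of why this kind of moderation attracts so little free-speech controversy.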

Might have a better argument with gore instead of porn.


I think it is reasonable to assume that for the purposes of free speech, we’re talking about people who are of age to engage in the discussion.

“What is your name?” can be illegal speech when directed to a 12 year old on the internet in the US.


The content of spam is usually illegal, since it often promotes scams and frauds, and often goes against local advertising laws.


That's not the case at all.


> The popular radio program “On the Media” feared Musk’s support for free speech would lead to a free-for-all environment rife with child pornography. But that’s a strawman: child pornography is illegal.

This is itself a strawman.

No one thinks Musk will permit it on Twitter. The gutting of the moderation teams who tackle it is the concern. An underenforced rule is often not a very effective one.


All online platforms need to have moderators to prevent things like spam and illegal content. That doesn't mean that they can't also uphold the idea of free speech principles though.

Musk really went crazy with cutting staff, but I'm not sure if it was because he wrongly thought he needed to cut the moderation teams in order to support free speech or if he just did it because moderation is expensive.


Why is spam always glossed over in these "principled defenses" of free speech? It's legal speech.


I love me some free speech, but I don't think I've ever met a true free speech absolutist.

Most of us are just fine with limiting speech when it's for the public good. We don't want companies to be able to lie about their products, so we support laws against false advertising. We're okay with people getting arrested for making bomb threats. We even approve of compelling speech when it means forcing companies by law to label the ingredients of their products!

Left unchecked, spam makes the services we love unusable. It can prevent us from having the very discussions we flock to social media platforms to engage in.

Social media platforms should provide a constructive environment where ideas can be freely discussed and that means moderation is necessary for things like keeping out spam, keeping discussions on topic, and even filtering out trolls.

Content moderation is essential to a healthy forum, but obviously care has to be taken to maintain a balance between restricting and allowing content, guided by what would best facilitate the constructive discussion of ideas, and not by personal feelings about the ideas themselves. It's not an easy task, and it's thankless work, but if it weren't for moderation none of us would be here talking about how spam should or shouldn't be allowed to ruin everything.


> Most of us are just fine with limiting speech when it's for the public good.

I'd edit this. "Most are just fine with limiting speech when it isn't them." The "public good" to, say, Andy Ngo is going to involve throwing a whole lot of leftists in prison.

This whole thing is just about having the guy you like be the one making the decisions.


For the public good? You mean “aligns with what they think is for the public good and their own views”. There is no such thing as unlimited free speech, as there are always limits; really it is just a question of where the lines are drawn.


Because it's not very interesting: nobody has a good faith reason to argue in favor of allowing SPAM. I assume it's being used to demonstrate the arbitrary nature of what speech people would want to allow, but it's a bit pointless. SPAM falls into the category of things that are arguably simply abuse of the system, using it in an unintended way. SPAM is not discourse, or an ideology, or a belief. SPAM is a pattern of behaviors. The general idea is simple: in principle, expressions that are legal should not be disallowed on moral grounds, but disruptive patterns of behavior absolutely should be.


Some people like to post a word or a short sentence or some kind of artwork many, many times, everywhere they can. They enjoy knowing their message is seen by many. For them, it's not pointless.

At what point does it become spam and we restrict what these people view as free speech?


I hate that I am answering this, as if it's really a question posed in good faith. But again, the problem isn't the expression. The word or artwork isn't the problem. The problem is the disruptive behavior pattern.
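One way to make the content/behavior distinction concrete: a repetition filter can flag the pattern without ever interpreting the words. A toy sketch, with an invented class name and threshold:

```python
from collections import Counter

# Toy sketch: flag repetition of the same message by the same user,
# regardless of what the message says. Names and thresholds are invented.
class RepeatDetector:
    def __init__(self, max_repeats=5):
        self.counts = Counter()        # (user, normalized text) -> count
        self.max_repeats = max_repeats

    def record(self, user_id, text):
        """Record a post; return True once it crosses the repeat limit."""
        key = (user_id, text.strip().lower())
        self.counts[key] += 1
        return self.counts[key] > self.max_repeats
```

Whether the repeated message is artwork or a slogan is irrelevant to the detector; it only ever sees the behavior.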


Is a whole bunch of people calling somebody a "fa***t" in DMs discourse? Lots of people have told me that this isn't illegal so Twitter should do nothing.


This is a good example of a question posed in bad faith. I shouldn't have to reply, but unfortunately in this situation if I don't address it, it looks like I'm simply unable to, which is not the case.

I have a stunning revelation to make:

I am not in favor of allowing all legal things on the principle that because they're legal, they should be allowed. In fact, given that the thing you're replying to is arguing in favor of moderating SPAM, I figured it would be apparent. Apparently, it is not.

Furthermore... Like SPAM, harassing someone in the DMs by simply calling them homophobic or racial slurs is not "free expression." It's harassment. Harassment, libel, etc. are not things that everyone (hopefully not most) who believe in free expression as a principle are trying to defend.

I'm mostly not going to address the "lots of people have told me" and disregard it, since it's probably made up for the bait. But if not, I mean... Good for all of those lunatics, I guess. I'm not associated with them, and I don't like Twitter or any social media platform to begin with.


Is a whole bunch of people calling someone a Nazi any different? It's a signal of social disapproval, even if it's distasteful.

I think you can meaningfully distinguish individuals expressing distasteful speech from coordinated campaigns and harassment, and spam falls under the latter.


We can have a discussion about what sort of speech is unacceptable on a social media platform. My concern is largely with the people who insist that we shouldn't even have the conversation about insulting messages because of free speech absolutism or whatever.

The "oh but lefties are mean too" argument immediately retreats from the idea of free speech absolutism. I'm down for that.


> My concern is largely with the people who insist that we shouldn't even have the conversation about insulting messages because of free speech absolutism or whatever.

That's not where I would draw the line against free speech absolutism. Insults or rudeness from individuals should be permitted; insults from groups/coordinated campaigns are where I would draw that line, because that starts crossing into inciting a mob. Mobs of any political persuasion are undesirable.

Uncoordinated-coordination seems like an emergent phenomenon of social media though, which is why this is a tough issue. It's almost like we need some kind of back pressure against virality to keep mobs in check.
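Hypothetically, that back pressure could be as simple as letting a post's reach grow with the log of its share velocity instead of linearly, with a hard cap. The function and parameters below are invented purely for illustration:

```python
import math

# Toy virality damper: each extra order of magnitude of sharing buys
# less and less amplification, and reach is hard-capped. Numbers invented.
def reach_multiplier(shares_per_hour, cap=10.0):
    return min(cap, 1.0 + math.log1p(shares_per_hour))
```

Under this sketch a post shared 100 times an hour gets roughly 5 to 6x reach rather than 100x, so uncoordinated pile-ons lose much of their mob-forming amplification without any message being removed.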


It's more that the SPAM issue shows that "speech people" don't have an actual working methodology of distinguishing 'good' speech from 'bad' speech any more than the very people they complain of being censorious.


Even "free speech absolutists" are not actually absolutists, because almost all of them have the asterisk of "within the law." The law is quite arbitrary too, with my most favorite bit of nonsense being obscenity law. The point isn't that you should allow everything all the time, it's that in general banishing ideas or expressions because they're immoral sucks, and I don't like it out of principle. There are, of course, other reasons why we limit expression, and some of those are more reasonable in nature, even if it's not always a good idea.

That all having been said... SPAM and harassment are not problems because of the expression itself, they're problems because of the disruptive patterns of behavior. The point is not that you can't say something, or have a given opinion, or etc.

I'm not really sure how this came to be everyone's ultimate catch 22 on free expression when there's more obvious caveats, such as how arbitrary the law is. But as arbitrary as the law is, it's like gofmt. Nobody's favorite, but everybody's favorite. (This is possibly one of the worst HN analogies this month, which now that I think about it, should probably be a thing someone tracks.)


"That all having been said... SPAM and harassment are not problems because of the expression itself, they're problems because of the disruptive patterns of behavior. The point is not that you can't say something, or have a given opinion, or etc."

Okay, but to ban speech on this is once again to pass judgment on its value to the contribution of the discourse or whatever avenue for communication is at issue. That's why the absolutist position is ridiculous. It isn't navigable from any perspective save for the very perspective they are already criticizing.

Also, I'm not sure why you chose the word arbitrary. That's not what arbitrary means. Obscenity laws aren't arbitrary at all, they are based in specific judgments related to a community's perception of what is and isn't acceptable. I'm not saying obscenity laws are good or especially well-reasoned, but they are clearly not arbitrary. Perhaps you meant subjective/un-objective?


What counts as "obscene" sure feels arbitrary, but fine. Subjective.

> Okay, but to ban speech on this is once again to pass judgment on its value to the contribution of the discourse or whatever avenue for communication is at issue. That's why the absolutist position is ridiculous. It isn't navigable from any perspective save for the very perspective they are already criticizing.

The key point that I've been failing to convey effectively is very simple: with SPAM, the expression itself is not the problem. If you post it 10,000 times responding to unrelated people, that is a problem.

(I realize that commercial SPAM is possibly what you are referring to here but... That sort of SPAM is more or less permitted on social media, so it's kind of neither here nor there.)

This generally follows: if you DM someone to yell racial slurs at them, you are harassing them. It's not about the platform banning naughty words, it's about banning disruptive conduct. The conduct is about the behavior, not the ideas or expressions expressed in them.

The "absolutist" position is basically never actually "absolutist". I initially thought people were interpreting it literally as a joke or something, but it seems like it has been taken pretty seriously. Yet, there are exceedingly few people who think that unprotected speech like CSAM should just be allowed. They DO exist, but I have a feeling the speech absolutists you are referring to do not. Doesn't that already make this discussion moot?


How is obscenity arbitrary? What's considered obscene is related to what's considered not acceptable in society... you are acting like people arbitrarily decided that ducks are obscene...

>The key point that I've been failing to convey effectively is very simple: with SPAM, the expression itself is not the problem. If you post it 10,000 times responding to unrelated people, that is a problem.

You have conveyed that but it is not a useful metric by which to filter things from an absolutist standpoint because you have to make a value-judgment on the worth of the speech in regards to the venue... exactly what I said before.

>This generally follows: if you DM someone to yell racial slurs at them, you are harassing them. It's not about the platform banning naughty words, it's about banning disruptive conduct. The conduct is about the behavior, not the ideas or expressions expressed in them.

It's not that it's disruptive... it's that it's harassment, which is already a civil action and likely criminal in your jurisdiction as well. If you are gonna talk about how arbitrary laws are... maybe know a law or two?

>The "absolutist" position is basically never actually "absolutist". I initially thought people were interpreting it literally as a joke or something, but it seems like it has been taken pretty seriously. Yet, there are exceedingly few people who think that unprotected speech like CSAM should just be allowed. They DO exist, but I have a feeling the speech absolutists you are referring to do not. Doesn't that already make this discussion moot?

I'm not Elon Musk saying I'm buying Twitter in order to support free speech... so don't look at me! I don't have problems with content moderation because I'm not naive.


I am disengaging at this point. I do genuinely like discussing these things, but I don't think we're speaking the same language.


Which side of that line does neo-Nazis trolling Twitter count as?


Trolling is a pretty vague word, it doesn't really mean anything. So it's hard to really reply to this with any degree of seriousness, if it even was serious to begin with. That said, on properly moderated forums, trolling would usually be moderated, by a human that proactively moderates discussions. Social media can't really do that because the scale of the moderation team they'd need to do it would be literally unthinkable. It's clear that they need huge moderation teams just to upkeep the crappy standard of handling reports inconsistently that they have today. Exactly what to do with that information, I don't know.


Trolling is online speech used to deliberately upset others. That's it. That's what it means. It has a clear definition.

You might have wanted to say that no one can agree on what speech is trolling and what speech isn't, but that's not because the word is vague. It's because people disagree on the deliberate and the upset part.


It may be clear but it is also broad. The law is more narrow: there are forms of trolling that are illegal speech and there are some that are legal.

There may be a good moral argument to draw that line differently when the consequence is a ban from a commercial platform vs being locked in a prison.


I wouldn't define it as limited to online speech. And most trolling I'm familiar with is calling out asshats and holier than thou types in sarcastic and funny manner.

No definition is as concrete, unmalleable, and unchanging as you seem to imagine.


No, I didn't mean to say that. Case in point... Even that definition is vague.


Can you elaborate? It seems definitionally straightforward to me, so I'm not sure what I'm missing.


"online speech used to deliberately upset others" does not actually distinguish trolling versus other malicious behaviors, such as just being rude and flippant on purpose, so it is not very precise. It also doesn't really describe the actual kinds of behaviors that trolls engage in, but rather constitutes a class of behaviors that are not necessarily obviously connected, so it's vague. There's a lot of different ways that people troll, and different kinds of trolling, and I don't feel like that definition really summarizes it. For example, the term "concern trolling" is generally included in the umbrella, but it's actually more subtle than just being used to upset others; it's subversive, but the goal isn't necessarily just to upset others.

Truthfully, the two observations are related: The word "trolling" being kind of vague is probably the main reason why people do not agree on what actually constitutes it.


from this thread, it sounds like you're saying spam has a clear, precise definition but trolling does not.

what then, in your mind, is the clear, precise definition of spam?


I'm a little frustrated here, because I never attempted to imply that SPAM is easier to define than trolling. If that's somehow something people are legitimately reading out of my replies, then I must've messed up somewhere. All I have been trying to suggest is that nobody has made a good faith argument in favor of allowing SPAM, which is not the case for trolling. But the thing is, while neither have a concise definition, and social media moderation is imperfect at dealing with both situations, it is MUCH easier for a human to distinguish SPAM from trolling. In some types of trolling, the very point of it is that it is difficult to distinguish from a good-faith post; if it wasn't, it would be bad bait. Whereas many SPAM patterns, by nature of being SPAM, are detectable just by looking at posting patterns and not even contents, which is basically the way that social media handles such content. You can't do that for human trolls, because human trolls don't look much different from a high level as other users do, especially depending on what kind of troll you're dealing with.

I am kind of surprised at how many different ways people have interpreted what I said. I'm frankly feeling a little defeated.
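For illustration, the pattern-based detection mentioned above (looking at posting rate and repetition rather than message content) might look something like this rough sketch. All class names, thresholds, and parameters here are made up for the example, not any platform's actual system:

```python
from collections import defaultdict, deque
import time

class PatternSpamDetector:
    """Hypothetical sketch: flag accounts by posting behavior alone,
    without inspecting what the messages mean."""

    def __init__(self, max_posts_per_minute=10, max_duplicates=3):
        self.max_posts_per_minute = max_posts_per_minute
        self.max_duplicates = max_duplicates
        self.timestamps = defaultdict(deque)                # user -> recent post times
        self.dupes = defaultdict(lambda: defaultdict(int))  # user -> text -> count

    def record_post(self, user, text, now=None):
        """Record a post; return True if the behavior looks spammy."""
        now = time.time() if now is None else now
        times = self.timestamps[user]
        times.append(now)
        # keep only the last 60 seconds of activity
        while times and now - times[0] > 60:
            times.popleft()
        self.dupes[user][text] += 1
        too_fast = len(times) > self.max_posts_per_minute
        too_repetitive = self.dupes[user][text] > self.max_duplicates
        return too_fast or too_repetitive

detector = PatternSpamDetector()
# 20 identical replies in a couple of seconds trips both heuristics
flagged = [detector.record_post("bot1", "BUY PILLS", now=i * 0.1) for i in range(20)]
print(flagged[-1])  # True: flagged purely on behavior, not content
```

Note that nothing in this sketch ever evaluates the idea being expressed, which is the point of the comment above: the same text posted once would sail through.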


"such as just being rude and flippant on purpose"

How is that not trolling?


Trolling almost always involves fishing for a reaction by acting dishonestly or misleading. When you are simply rude and flippant on purpose, that's just being an asshole. If someone is rude to me, I don't say "ah, I just got trolled."


It sounds relatively straightforward to make it "Deceptive speech used to deliberately upset others or undermine discussion." Is that accurate and precise enough to cover your conceptualization of trolling?


You don't think that being purposefully rude is to solicit a reaction? I guess we'll have to disagree about that.


Yeah, I think it's to solicit a reaction 100%, but it's not subversive at all. Their intentions are clear. The "coaxed into a snafu" meme hints at the nature of what makes trolling unique versus just flaming. Reading back the last part of that sentence has teleported me back in time about 20 years.


It's definitely subversive, as it's subverting communications norms by being rude in the first place.


I mean in a different sense, in the sense that it is insincere. That's the problem with trolling right there. If you're sincere, it's obviously not trolling.

Arguing that something is trolling because it solicits a reaction, or that because it's disruptive it counts as trolling, doesn't make sense. You can't distinguish trolling without knowing someone's motivations. Posts that could be trolling could just as easily be venting, or bringing up a genuine concern that just happens to be contentious, or etc.

Otherwise, flaming people in general is obviously trolling. That's not the way the word trolling has been used historically.


It's not insincere. Perhaps you mean unkind?


I just looked and plenty of definitions of trolling seem to invoke the same idea of insincerity.


How does one tell which ones are trolling and which ones are sincerely expressing their beliefs?


it is or can be. It's a weird case though because you could theoretically drown out everyone else's speech with it. Like imagine if there were no spam inboxes or spam restrictions on SMTP or email inboxes and for every one good email you got, there were 10K spam emails about enlargement pills. It would make email much less useful and maybe unusable.

I have thought about this before and I get kind of annoyed when people assume spam is a given exception to free speech. Free speech by itself doesn't imply a limit. So I think you have to carefully design around this issue or just let it happen and maybe it dies off on its own since too much spam chases everyone away and then what's the point of spamming?

edit: drowning out other people's speech may be allowed by total free speech but it's also contrary to the intention I think. I think free speech means that any speech someone wishes to make, they may do so without restriction or censorship by the medium. This isn't a completely satisfying definition though.


That's because you can spam anonymously. If email senders had to be securely identified, as in China, spam would not be a problem. Spam, and a ton of CAN-SPAM lawsuits would land on you.

Most of the problems that seem to justify prior restraint come from poor source identification. If it's illegal, deal with it through the legal system, after the fact.

Facebook may have had the right idea with their "real names" policy, if they'd stuck to it and required strong authentication.


Email could still be usable in that case using personal whitelists and shared whitelists.
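A minimal sketch of the whitelist idea: accept mail only from senders on a personal whitelist or a shared, community-maintained one, and drop everything else unread. All addresses and names here are illustrative assumptions, not a real mail setup:

```python
# Personal and shared (community-maintained) sender whitelists.
personal_whitelist = {"alice@example.com", "bob@example.com"}
shared_whitelist = {"newsletter@python.org"}

def accept(sender):
    """Accept only mail from whitelisted senders."""
    return sender in personal_whitelist or sender in shared_whitelist

inbox = []
for sender, subject in [
    ("alice@example.com", "lunch?"),
    ("spammer@pills.biz", "ENLARGEMENT"),
    ("newsletter@python.org", "weekly digest"),
]:
    if accept(sender):
        inbox.append(subject)

print(inbox)  # ['lunch?', 'weekly digest']
```

The trade-off is the obvious one: a pure whitelist blocks legitimate first-time senders too, which is why shared lists help.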


Imagine for a moment comparing the volume of spam to the volume of oration. Unlimited volume is not permitted in free speech. That is a disruption to public life.

There is no reason to assume that this theory would fail to be legitimized in courts. Spam is not legal in other arenas, i.e. phone calls & texts.

The fact that the operators of spam bots are difficult to prosecute does not mean what they are doing has legal grounds.


Spam restrictions aren't generally applied by the government, and therefore don't fall under the constitution. The law doesn't require anyone to listen to someone else's speech. It is not a violation of anyone's rights to discard their emails unread using an automated filter.


I can't make sense of your statement.

Can you clarify what this means?


Hmmm. I thought it was straightforward. I'll unpack it:

> Spam restrictions aren't generally applied by the government, and therefore don't fall under the constitution.

The "free speech" constitutional amendment stipulates that the government can't restrict speech. It doesn't apply to a mail service provider, which is free to reject whatever it likes.

> The law doesn't require anyone to listen to someone else's speech.

Your freedom to speak to me ends when I decide I don't want to listen to you. I have a right to not listen, and I have a right to reject spam.

> It is not a violation of anyone's rights to discard their emails unread using an automated filter.

I don't know how to say that more clearly; using a spam filter doesn't violate the US constitution. Email would be unusable without spam filters.


Free speech is about allowing any idea to be expressed. Spam is not an idea, but a way of expressing something, so it's not inherently against free speech to restrict it.


Yet many will complain that what they call "hurt feelings" are irrelevant regarding whether or not their statements were OK and should be welcome. What separates a "way of expressing" from a true "idea"? "Abuse" is not an idea, but a way of expressing something? "You suck" is something of an "idea," but "I hope you die" or "watch out for your kids" if it's not a literally true threat? Not so much... And if it IS a true threat? For that matter... "mockery" is not an idea... "hateful" is not an idea... "insulting" is not an idea? Seems like you can take that definitional dodge to lengths to allow just about any sort of moderation.

(Any given post of spam, basically just unsolicited peer-to-peer advertising, seems evidently an idea, IMO - it's very much something you are supposed to believe and act on. If it's a volume/repetition/thoughtlessness thing, how's that different from a wave of trolling posts?)


These are all heavily subjective and often used as a code for certain opinions. In current political discourse, “hateful” usually means “disagrees with the woke ideology”, so a statement like “kill all white people” would not be considered hateful, whereas “children under 18 shouldn't be allowed to take unnecessary cosmetic surgery” would.


You carefully avoid saying that "spam" isn't subjective, but if you think it is, your reply wouldn't be relevant, so: I doubt you'll have any luck trying to "objectively" establish a criterion for spam classification that nobody disagrees with.

Or what about the threat examples, since you claim all those categories were subjective? You didn't engage with that one, or many of the others. "I know your kids go to [specific school], watch out"? Subjective? "I'm going to kill you and your family?" Subjective or straight up threat? How about mockery or insults? "Your post is stupid and you are stupid?" Is it subjective that that's insulting? So is it an "idea" or just a particular way of saying I disagree with you? "You are clearly politically motivated and not arguing in good faith?" Maybe that one is actually an idea!


> You carefully avoid saying that "spam" isn't subjective, but if you think it is, your reply wouldn't be relevant, so: I doubt you'll have any luck trying to "objectively" establish a criteria for spam classification that nobody disagrees with.

Alright. Spam is when you post something to a space where it's not relevant (not really applicable to Twitter since it doesn't have topical spaces), or post something repeatedly without a good reason.

> "I know your kids go to [specific school], watch out"? Subjective? "I'm going to kill you and your family?" Subjective or straight up threat?

Those are specific threats, which are not protected as free speech.

> "Your post is stupid and you are stupid?"

Nothing wrong with saying that, aside from the fact that it's not constructive and you'd be better off explaining why you disagree.

> "You are clearly politically motivated and not arguing in good faith?"

Same as previous.


"I know your kids go to [specific school], watch out" isn't a specific threat.


How not? It expresses intention to harm a specific individual at a specific place if a certain implicit condition is met.


Is "shut the fuck up, fa***t" an idea? That's precisely the sort of thing that I see people say shouldn't lead to bans on Twitter.

This is very clearly not just about "expressing ideas." There is very real behavior that is designed entirely to hurt other people and is 100% legal that is at the center of this discussion of online moderation.


> Is "shut the fuck up, fa**t" an idea? That's precisely the sort of thing that I see people say shouldn't lead to bans on Twitter.

It's not an idea, though I don't see a particular reason to ban it, unless you have a platform like Saidit that generally encourages constructive discussion over baseless insults (which I consider a great idea).


How does this jibe with the claim that free speech is about expressing ideas?


I'm not saying it goes against free speech to ban this particular sentence. It's just hypocritical if you don't also ban other insults.


A kind of motte-and-bailey, I guess.


What desire does a platform like Twitter have for spam? We have laws that prevent people from uttering threats. Twitter has rules against spam.

It's their platform and they want their users to be free in their discourse, they just don't want spam.


It's fair to assume he cut moderation teams because they were intimately involved with suppression of speech at the behest of the government and political interests.

Determining which moderators were witting participants in the chilling of free speech is more difficult than axing a great many of them that underperformed and keeping the few with real, tangible contributions.


Being “intimately involved with suppression of speech at the behest of the government and political interests” exactly describes the job of removing child sexual assault content.


Are you familiar with logical fallacy?

Yes, bad things are on the internet.

That is in no way the content or information I'm describing, nor does it fall into the category of legal free speech.

Bringing up illegal things to argue for the suppression of legal speech is, I don't know, moronic.

You are perpetuating a tired trope and I haven't the energy to persist with this discussion.


https://twitter.com/elizableu/status/1599484564832854017?cxt...

This is where you get bent.

Your logical fallacy is detailed here:

https://en.m.wikipedia.org/wiki/Appeal_to_emotion

I hope that, despite what you are implying, this success by Musk is not upsetting to you.


A better word is partisan or authoritarian interests.


Hey, it raises the question: how many authoritarian and partisan tactics can one utilize before being considered a partisan authoritarian?


Why do you think that is “fair to assume?” To me, it sounds like wild conjecture.


Musk has stated that Twitter interfered with elections. I'm not sure what evidence he's provided for that, but he did say it.


he has said a lot of things that are wildly inaccurate.


Given his usual proclamations, (e.g. level-5 self driving) I think it is fair to assume that he is not very high on evidence.


And Trump still hasn't conceded 2020... who cares what people say when it's nonsense?


The groups did a fair amount of censoring and controlling political information.

He bought Twitter to stop that from happening.

So, it's safe to assume, in my opinion, that he fired a bunch of people that care more about their ideology than they care about fair or honest debate, or acting in accordance with the principles of freedom or the United States.


You don't need to assume. Twitter has openly stated the political and ideological values that drive its moderation.


Where?


If I had to guess what is in Musk mind, I would bet that he thought he wouldn't go anywhere with an organisation full of employees who actively hate and resist him, so this was a way to shake the tree hard, to have a chance of a fresh start. Doesn't mean a lot of those teams won't be restaffed, just not restaffed with the same political activists.


This is the most agreeable take to me. Musk's organization style is a strict, recursive hierarchy, with a unitary and grandiose vision divided and conquered by VPs and managers. He cannot have a distributed, self-organizing collective like a typical Web services company, and must simply be cutting down employees to the minimum.


Yeah, I'm not sure how anyone could be surprised. It's completely consistent with his explicitly stated plan and goals.


I agree. Musk purchased a company staffed by people that believe they have a duty to silence opposing views. There is simply no good reason to keep staff that are actively opposed to the new owner's goals. It was either gut the company and take the hit now or battle for years with staff that sabotage your goals.


> Musk really went crazy with cutting staff, but I'm not sure if it was because he wrongly thought he needed to cut the moderation teams in order to support free speech or if he just did it because moderation is expensive.

It's largely cost-cutting for sure, but I also think he believes he needs to start over in a lot of these departments, the entrenched beliefs of how things should be moderated was undoubtedly very deep. I suppose we'll see!


All centralized platforms want to control the content, but this is not actually a requirement of an online platform.

Spam in e-mail is a problem because spam consists of private messages sent to individuals, but public messages can be quickly categorized by a community, and one's own characterization of a message for the purpose of sorting messages for display can be based on whether it's chosen for retransmission by people whom one has a high trust in.

Imagine having a feed which is initially a horrible spam-filled mess, and then as you encounter things you like, you upvote them, until your feed is resorted to include mostly quite interesting things. When you again find spam or uninteresting content, you reduce the weight of the people who retransmit it.
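That trust-weighted feed idea can be sketched in a few lines: each post carries the trust weight of whoever retransmitted it into your feed, upvoting raises that person's weight, and marking spam lowers it. Everything here (class name, weight factors, data shapes) is an illustrative assumption, not an existing system:

```python
from collections import defaultdict

class TrustFeed:
    """Hypothetical sketch of per-viewer trust-weighted feed ranking."""

    def __init__(self):
        self.trust = defaultdict(lambda: 1.0)  # retransmitter -> weight

    def upvote(self, retransmitter):
        # liking a post raises the weight of whoever surfaced it
        self.trust[retransmitter] *= 1.5

    def mark_spam(self, retransmitter):
        # flagging spam lowers that source's weight
        self.trust[retransmitter] *= 0.5

    def rank(self, posts):
        """posts: list of (retransmitter, text); highest-trust sources first."""
        return sorted(posts, key=lambda p: self.trust[p[0]], reverse=True)

feed = TrustFeed()
posts = [("spambot", "BUY NOW"), ("friend", "interesting article")]
feed.upvote("friend")
feed.mark_spam("spambot")
ranked = feed.rank(posts)
print([text for _, text in ranked])  # ['interesting article', 'BUY NOW']
```

The key property is the one the comment describes: the ranking is computed per viewer from that viewer's own feedback, so no central moderator decides what anyone sees.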

Moderation always risks being political manipulation, and it's probably too dangerous to democracy to allow it. It's certainly too dangerous to accept foreign moderation (I'm from Europe).


To really drive home the point, simply consider that pre-Musk Twitter did not cite free speech as a core value.


Unless you're splitting hairs and trying to say "free speech" and "free expression" are fundamentally different, they absolutely did.

July, 2021: https://archive.ph/VlpYI

> Defending and respecting the user’s voice is one of our core values at Twitter. This value is a two-part commitment to freedom of expression and privacy.

> This is a global commitment, and while grounded in the United States Bill of Rights and the European Convention on Human Rights, it is informed by a number of additional sources including the members of our Trust and Safety Council, relationships with advocates and activists around the globe, and by works such as United Nations Principles on Business and Human Rights.


I think it's a little late to still believe Musk is a fan of the principle of free speech, after events such as his little pissbaby tantrum when people started changing their Twitter name to Elon Musk and their avatar to that photo of him balding.


Impersonation being banned is a bad example. A better one is Musk refusing to let Alex Jones back on, which is what led to the head of trust and safety resigning because he realized this was just Musk making up rules as he went along.


This is the quote they're referencing:

NATALIE WYNN (the mind behind Control Points, a left wing YouTube channel) I do think that looking at 8Chan is a pretty good case study in what happens when you create a "okay, let's just let people say anything." People are posting child pornography to this website on a fairly frequent basis.

I think you're right. "On the Media" was just talking about how having an "anything goes" policy leads to a place where nobody wants to hang out, where people post illegal stuff even though it's technically not allowed. Which I think is valid when Musk has previously said Twitter should allow anything legal.


For reference, it's ContraPoints.


It's been grimly entertaining to watch the Musk/Kanye cycle from inviting him back as a symbol of free speech to discovering that Kanye interpreted that as antisemitism was green lighted now, to banning him again.

I guess that's the context for today's free speech discussion.


> No one thinks Musk will permit it on Twitter. The gutting of the moderation teams who tackle it is the concern. An underenforced rule is often not a very effective one.

This isn't cause for concern because they've already caught some longstanding CP. They are doing a better job now.


I mean, they are actually not doing a better job. Nothing like that was demonstrated at all.


[flagged]


Again, it’s not about whether or not he wants the stuff on Twitter. If you fire all the staff who find it, handle user reports of it, and remove it, the end result is the same.


It doesn't matter how many flunkies you have on staff. What matters is whether the stuff is effectively caught and removed (and probably reported to the authorities).

Time will tell on this, but currently Musk looks a lot _more_ credible on this than Twitter 1.0. They seemed to have been a lot more focused on repressing wrong-think than dealing with actual criminal behavior. And this isn't just abstract--they helped ruin countless lives.


Do you have data that shows Musk is taking down more CP or similarly vile illegal content than "Twitter 1.0?" Or are you seeing the news that he took down a few noteworthy hashtags that had previously been ignored and using that as broader evidence?


> tolerate child pornography (unlike the prior owners of Twitter)

Twitter previously tolerated this? That’s the first I hear of that. Do you have more info?


> (or who's old enough to remember Polanski)

That's not a resolved matter by the way. He's still alive, still a convicted rapist, still a fugitive, and still defended and respected by Hollywood-centric media. He continues to evidently be immune to 'cancellation' because... influential people in movie industry like his movies I guess.


That was a product of a bubble of 1970s liberalism that was open minded towards sex between adults and young teenagers. It was an intersection of the 1960s sexual revolution, and the 1960s trend of treating kids more like adults (e.g. Tinker v. Des Moines)—but prior to the #MeToo era focus on affirmative consent and power dynamics. They simply couldn’t understand the backlash against Polanski.


Are you implying Balenciaga is .. trafficking children or making child porn or something? Not sure what you mean by "the person we should be worried about."

My understanding is they made an offensive photoshoot using children but I wasn't aware of any sexual abuse allegations. I'm aware of the court opinion on the blanket and the sexually suggestive stuffed animals. It's inflammatory, sure, but there's definitely a case that it's artistic as well.


Replying to my own comment to say that I've spent my evening reading up on the controversy, looking at the ads, reading the SCOTUS decision, studying Borremans.

I'm pretty sure this is just a more digestible, mainstream version of the Wayfair human trafficking conspiracy theory.


No abuse that I've heard of. But very offensive. Look further (not at work). It's too gross to describe on HN.

I'm pretty strong on free speech myself. But if you want to repress awful stuff, Musk is not the place to start. And the narrative that he's somehow lowering the level of discourse on Twitter is absurd.


Going further, it feels like the ad campaign did contain some social commentary about the sex-posi, BDSM-posi, generally more adult world children have to navigate. The fact that people find it uncomfortable feels like the point.

But as a society, can art and images only be taken literally and autobiographically now? That feels like old Christian ways when Jesus could not be depicted (or at least the art history part of my brain thinks of that)


I've seen it. It's a child with one of their stuffed animal purses in BDSM gear. And there's some wine in a shot. And there's text for a SCOTUS ruling that child pornography is not protected speech.

I'm honestly not seeing what's so grotesque about it all? Feels tame as far as fashion stuff goes. Wouldn't bat an eye in the '90s.


You haven't reached the bottom yet. Keep going until you hit Borremans. It's not 4chan, but I'm not listening to these people on the subject of standards and practices.


I looked up Borremans because I'd never heard of him but apparently a book of his work in the photo shoot is some sort pedophilic "code"? Or something like that?

What exactly is the issue with his work? That it depicts nude (but not sexualized, as far as I could find) children? And the children aren't real models, afaict, either.

I like what I see of his work. Reminds me of Francis Bacon. Deeply human art. There's something about that kind of figure work that speaks to the soul.

So yeah, not sure why this artist is "the bottom."


I think it's really creepy that you don't see anything wrong with a child in BDSM gear. And alcohol.

That's actually sadism.

I dunno, maybe you just don't get it. Do you know what sadism is?

On the other hand, I've yet to hear a convincing argument as to why it shouldn't bother me other than "oh just get over it it's not that bad", which isn't working with me, since I have no petty bourgeois sensibilities, and I know what I'm doing when it comes to art.

Why should I be cool with sadism?


I thought the ad was creepy and weird, but you're lying here. There was no " child in BDSM gear" in the photo. Why lie to make a point? That's creepy and weird, too


I made a mistake but everyone knows what picture I'm talking about. It's not a lie.

So it was a child carrying a doll that had BDSM gear. But that's funny cus if the girl was the one in BDSM gear, that would make it wrong? Is that the line?


I'm curious why, if you believed you made a mistake, you didn't edit your comment at the time to avoid propagating misinformation (as the 2 hour edit window was still open when you made this response acknowledging it as a mistake)?

I personally am not familiar with this photo and don't really want to see it, so yes, your comment could have misled me.


There's no ulterior motive. I just messed up. I fixed it as soon as I got called out on it without objection and I left it there so people can see what the issue was.

If you are going to participate in this discussion, you need to keep up. You're attributing malice where there is none. That means you have to see the picture.

I'm not saying you can't say anything, just that I'm going to dismiss you out of hand.


The child was next to some glasses of wine. Why does that make you clutch pearls exactly?

Have you seen Big Daddy? The Adam Sandler film? Is that also on this level? Feels like the same thing. Child actors in an adult piece of media with sexual and violent and otherwise adult themes.


The thing is I just don't care how you feel. You haven't explained how the kid can consent to this photo shoot or why it isn't sexual abuse.

I want to know if you think it's ok if the kid wears BDSM gear. Yes? No?


Why do you want to know that when you've acknowledged in a parallel thread that that wasn't what happened? It's hard for me to read this any other way than a deflection so that you can discuss a different set of events, which didn't happen, where you feel your arguments would fare better.

Given that we're already talking about a hot-button issue which is the subject of conspiracy theories, that seems dangerous.


Yeah, you're trying to catch me on a typo, but it's not gonna work cus I already acknowledged it.

I do know I hit a nerve with that particular scenario. I got all I wanted. Ok, so let's just move on. Let's get to the fun stuff.

So the line stopped at her wearing the BDSM gear. Why? Why not cross that line? Why is it ok for the bear to wear BDSM gear and for her not to?


I misunderstood. I thought we were having a discussion, but I can see you're looking to deliver a monologue. I'll leave you to it.


Lol. All I did was respond to the Adam Sandler move with my own. I'm just trying to speed this up and get to the good stuff already.

Apparently, if it was the kid wearing the BDSM gear, it wouldn't feel like the same thing. It would feel wrong.

Of course, if you, or anyone reading this feels different, speak out.


For added weirdness “ba len ci aga” is Latin for “do what you want.” See for yourself with Google translate. It could also be translated as the more familiar “do what thou wilt,” a rather infamous occult credo.


The company was founded more than a century ago by a Spanish prodigy named Cristobal Balenciaga... stop with the conspiracist nonsense. Not here.


Sadly it is commonplace here (HN). You can find comments on HN spouting Tucker Carlson and co's outrage on a daily basis. I do not envy dang's job.

Moderating dog whistles at-scale is an unsolved problem. Quick, Paul Graham, seize the opportunity to change the world!!


Don't get so worked up, it's just a funny[1] coincidence. Noticing funny patterns is definitely a thing hackers do my friend. It's taking them too seriously that's a problem, so don't do that!

[1] Weird, not haha.


Can't help but think of that Sartre quote about arguing with people in Bad Faith.


You’ll need to do some introspection to find any. I haven’t argued anything at all, and certainly not in bad faith. I simply presented a factual and amusing linguistic coincidence. Whatever conclusions you choose to draw from that fact are your own and tell us only about you.


This is starting to smell like Q-adjacent nonsense to me


That's basically what it is


And this gets to the heart of the problem. Twitter users think that because it was parody, the photoshoot was neither child pornography nor sexually abusive. Human beings recognize that it may have been parody but it was still child pornography and sexually abusive. It introduced, nay, immersed children in those scenes and behaviors.


But how was it abusive exactly? Child actors aren't allowed to be photographed in adult contexts anymore? I assume their parent(s) were present and involved and therefore it probably wasn't unsafe. Unless it's abuse by osmosis of their surroundings? Feels like Nathan Fielder's use of child actors was more offensive than this ad campaign and that was fine.


Twitter recently shut down several child exploitation hash tags that had been ignored until Musk took over.


Such as?


Is that really the sort of thing we need on HN?


You can easily find them on google if you're actually interested.


What a hero


At least some activists are saying that since the takeover, the moderation of CSAM has been dramatically improved after complaints had been falling on deaf ears for years. There are complaints that the old moderation team was ignoring clear and well-documented reports.

https://twitter.com/elizableu/status/1566255230374842369

https://twitter.com/elizableu/status/1594137408186073089

What's shocking is that apparently Twitter has had a massive CSAM problem for years and Twitter's crack moderation team apparently did very little about it, not even banning hashtags reported to them repeatedly. And none of this got attention until Musk bought Twitter.

[removed section about addition of new reporting option]

I say this as someone who is not in the slightest a Musk fan.


That thread is debunked in the comments; the CSAM report option predates Musk.

https://twitter.com/ChronicBabak/status/1594762640357982208


Thanks, I will remove that part of the comment. Assuming good faith from someone who appears to be extremely active in this area, perhaps this was an A/B tested feature and she didn't have it.

What I find more concerning about this is the way media attention is used offensively. Clearly CSAM has been a problem on Twitter for some time. But only post-Musk is CSAM on Twitter becoming a focus of attention in the media. Is it increasing or decreasing? We're likely to get only fearmongering articles about the T&S team layoffs.


I've been following her for a few weeks now. I think she is coming at it from a good position but she also seems to be a very big Musk fan, and takes what he says at face value. And many of the replies to her tweets come from very Qanon people.


About 90% of what she is saying is true, but she is not coming from a good place.

Eliza Bleu aka Eliza Morthland (aka Eliza Siep aka Eliza Cuts aka Eliza Knows) is the daughter of MAGA politician Richard Morthland and her "Bleu" trafficking advocate persona is an act. She's a former American Idol contestant, ex-gf of MyChemicalRomance's Gerard Way, associate of child molester Jeffree Star, & fundraising partner of convicted child rapist and conservative spokeswoman Felecia Shareese Killings. Bleu also coordinates with Mike "Who cares about rape? I don't!" Cernovich, one of the pizzagate amplifiers (and the man who got James Gunn fired), and is amplified by QANon rags like The Epoch Times. Bleu began her podcasting career by interviewing and platforming Tara Reade, the former aide of President Biden who was caught fabricating claims of sexual abuse.

She is, to be blunt, an incredibly proficient grifter and propagandist who specializes in weaponizing the topic of sex crimes.

Eliza has been a speaker at various Tesla events for years, and works hand in hand with Teslarati, an Elon Musk propaganda news site. They are the original sources of the initial bogus claim that Musk had moved against CSAM on Twitter.

When Bleu claims Twitter had a massive problem with CSAM and that the former Twitter execs did nothing, she is actually telling the truth. The platform relies too heavily on pornography for user retention, and it didn't/doesn't have the manpower or tech to filter out (at scale) the massive amounts of underaged porn and CSAM that comes with being a major adult content platform. Instead of prioritizing child safety and nuking all the porn on the site like Tumblr was forced to do by Apple, the prior admin opted instead to bury the issue even as it continued to grow into a massive albeit invisible problem. They deserve every criticism and attack Eliza Bleu has lobbed at them, regardless of her own actions.

Now Musk finds himself in the exact same bind. Twitter (still) needs porn to hold the site together, yet it's (still) thoroughly infested by CSAM threat agents. It's a tricky situation, and the wrong move sees Apple giving Twitter the Tumblr treatment.

So Eliza Bleu has been tapped to control the narrative to prevent this by using her survivor persona as a platform to convince everyone that Twitter 2.0 has actually addressed that awful CSAM problem. She's the figurative Iraqi information minister, hands up before the mics.

Observe her reactions to the Forbes article that cites Carolina Christofoletti (an actual researcher, academic and CSAM expert who is respected in the infosec and OSINT communities). Christofoletti explains that the problem was never addressed at all, that CSAM hashtags are a ridiculous focus of attention, and that the situation has gotten worse. Bleu immediately goes into PR mode, attacking Forbes and Christofoletti in a profanity laden tweet accusing them of having an agenda.

Not the behavior of someone actually concerned about child porn, is it?

Any reporting or research that contradicts the narrative she is molding is immediately a danger to her task. Bleu has attacked every other piece of in-depth reporting on this issue since her initial tweet claiming that Elon had purged most of the CSAM from Twitter. She is on damage control; her original claims were misleading, and now she will progressively be on the defensive as more of us in OSINT and the media call her lies into question.

Ironically, Eliza's current role makes her one of the biggest protectors of CSAM users on Twitter. Nothing helps them more than Bleu desperately attempting to propagate the narrative that they've been booted off the network when the reality is that they're surging well beyond anything anyone can imagine.

Keep watching her. You'll see the cracks.


"But only post-Musk is CSAM on Twitter becoming a focus of attention in the media."

False. Do queries about this issue prior to Musk's takeover.


The world has become so weird. That elizableu person responded to one of her tweets 'Important to note, Elon Musk himself “liked” a tweet in this thread last night. If this information wasn’t valid, if it wasn’t factual, that wouldn’t have happened.' and turned her story over to 'teslarati.com'


"I cannot contemplate human affairs, without laughing or crying. I choose to laugh. When People talk of the Freedom of Writing Speaking or thinking, I cannot choose but laugh. No such thing ever existed. No such thing now exists: but I hope it will exist, But it must be hundreds of years after you and I Shall write and Speak no more."

-John Adams, letter to Thomas Jefferson, 1817

https://founders.archives.gov/documents/Adams/99-02-02-6790


This is particularly ironic since Adams famously was responsible for the Alien and Sedition Acts and had people imprisoned for criticizing him.


Freedom of speech is an ideal - as such it is not easily achieved or maintained, even by the "best" of us - or those who claim to believe most in it.

If you read Free Speech: A History from Socrates to Social Media you'll find this "irony" repeated throughout history. It's a problem of human nature, and that's what ideals arduously seek to overcome.

https://www.amazon.com/Free-Speech-History-Socrates-Social/d...


Exactly why such values need to be enforced in a decentralized manner. Separation of powers goes a long way in this.


Freedom of association captures the idea of freedom of speech better than freedom of speech itself, I think. Private parties are (with some number of government constraints) free to pick who to work with, who to play with, who to talk with, and who they don't want to do anything with.


It’s weird to repeatedly see the argument that a corporation’s moderation of individuals’ speech is itself a form of protected speech, from a group of people who until recently mostly thought corporate speech wasn’t even broad enough to cover a corporation’s production and distribution of its own message in a political movie.

Like I think “corporations are people too, my friend”, but corporate moderation as a form of protected speech or association takes Citizens United to the next level.


Nobody so far has refuted this weak analysis better than David French did.

https://www.theatlantic.com/ideas/archive/2022/12/elon-musk-...

First Amendment lawyers seem mostly† to be dunking on the idea that there is anything controversial about the protection Twitter enjoys.

† maybe "entirely" is the right word here


That article doesn’t address the point I’m making, which is whether content moderation decisions are protected speech. I agree with French’s analysis that the Hunter Biden stuff isn’t a first amendment violation insofar as Biden wasn’t a government actor at the time he made the request.

The rationale behind regulating Twitter would be the same as the one behind campaign finance laws: to keep a big corporation from using its power to influence elections. Except the difference is that producing a political movie is clear political speech, while the decision to delete particular items from a firehose of user-generated content doesn’t seem to be expressing any message on the part of Twitter itself. As far as I can tell, the Twitterati are not addressing the implications of the “must carry” line of cases like TBS v. FCC, which hold that forcing a corporation that provides a pipe for content to carry particular types of content isn’t a first amendment violation.

The Twitter internal emails confirm that folks inside Twitter weren’t treating suppression of the Hunter Biden news as a political statement on the part of Twitter. They were concerned about the potential impact on the election.


The figure at the center of all of this, when he was banned, literally just said whatever and started a competing social media site. Concerns about this topic have picked up over the last decade; "we'll carry anything" companies have formed in response. Not that those didn't exist before.

In any case, you can get online somewhere to say what you want. Use Cloudflare. The market surrounding the matter of publishing stuff online is healthy, and the nature of the internet/web is such that you can reach a global audience with relatively minor equipment.

On the consumer side, what is the argument? There is no lack of outlets, serving every niche, freely available on the same connection that delivers you Twitter. What's the problem? That people can't look away from Twitter?


One pretty easy bit of subtext to infer from these discussions: American politics have polarized on education and SES, and the platforms where plugged-in knowledge workers, hipsters, and celebrities tend to hang out are naturally inflected with the politics of those cohorts of people, which really pisses off conservatives who want to hang out in those spaces.


When I am weaker than you I ask you for freedom because that is according to your principles; when I am stronger than you I take away your freedom because that is according to my principles.


There used to be a nice delineation between regulation of commercial and non-commercial activities, with the press, often straddling the divide, enjoying specialized treatment. But political, legal, and technological developments seem to have all conspired to make hash of that old, basic dichotomy.

It's hard to call out hypocrisy without first establishing some coherent principles. I'm not sure what those might be for any of the major political or legal factions. For example, how do we categorize and differentiate health care relationships, for when the government might want to dictate which pamphlets a provider must make available in their waiting rooms or which warnings must be placed on a label. It's roughly similar to the situation with social media companies in the sense of commercial entities mediating private relations, but I suspect a substantial number would find themselves on the opposite side of any hard line drawing.


Should media companies be required to provide a platform for government leaders to broadcast their messages?

To me this is basic 1st amendment stuff. We've gone pretty far down the road to authoritarianism when people think we need to protect the leader of the government's ability to force media companies to carry his messages.


It’s authoritarian when leaders force media companies to carry their messages because it puts the imprimatur of the New York Times or CNN on government propaganda. But Twitter isn’t a “media company”; it’s a platform for user generated content. It doesn’t purport to have its own viewpoint, or its own editorial stance. Nobody at Twitter purports to stand behind the content posted by users on the site. In that respect Twitter is much closer to, say, broadband providers than to media companies. Even most opponents of Net Neutrality didn’t argue it was a First Amendment violation to regulate the terms on which third party content was carried. (Maybe they weren’t creative enough.)

The question is whether the government can require a platform to host otherwise legal content on politically non-discriminatory terms. Put differently, it’s about whether platform owners can use their market power over what’s essentially infrastructure to distort the country’s political debate.

Insofar as many folks on the left believe that corporations don’t have free speech rights period, it’s not “basic 1st amendment stuff.” And even for folks who think Citizens United was correctly decided, a company producing a political movie in its own name is much closer to core political speech than the moderation decisions of a user-generated content platform.


> It doesn’t purport to have its own viewpoint, or its own editorial stance.

The central issue here is whether government officials can override the moderation policies of these companies.

The idea that twitter and other social media companies are just neutral platforms doesn't make any sense. Moderation and promotion of messages based on the content of the message has always been central to their business.

Also, the idea that you can just call a company "a platform" and then the government can just start directing the content of its media operations is rather alarming.

> Insofar as many folks on the left believe that corporations don’t have free speech rights period

To the extent the first amendment has anything to do with free speech anymore, they do. "The press" would have to be understood as an organization, not individual people. I don't know much about your far left people, though.


> The idea that twitter and other social media companies are just neutral platforms doesn't make any sense. Moderation and promotion of messages based on the content of the message has always been central to their business.

The issue isn’t neutrality, but rather expression. The First Amendment protects a person’s expression. It protects articles in the New York Times because those articles carry the imprimatur of the Times. When an article is published, the Times is the one speaking.

Twitter may or may not purport to be neutral, but it does not claim that the content on its platform is its own speech. Nor does it even claim that its moderation expresses any message from Twitter. Indeed, if they banned Trump because they hate Trump, and were willing to stand behind that position, that might well be protected speech.

But as you acknowledge, Twitter’s moderation and content promotion is just part of their business. They promote content they think users will like and moderate content they think users won’t. It’s like Google’s search results. Could the government prohibit Google from promoting its own products first and hiding the sites of its competitors in search results? Almost certainly.

> Also, the idea that you can just call a company "a platform" and then the government can just start directing the content of its media operations is rather alarming.

The difference between a “media outlet” and a “content platform” really isn’t a fine one, or at least Twitter isn’t anywhere close to the line. Is the company purporting to communicate a message? The New York Times is. CNN is. Twitter is not.


> It protects articles in the New York Times because those articles carry the imprimatur of the Times.

If that were true it would create a loophole so big that the first amendment would practically not exist. The President could require the New York Times to carry a daily editorial on the front page from the desk of the president, with just an additional caveat "This editorial does not reflect the opinions of the NYT". The President could require TV stations to broadcast speeches from the president speaking in the Rose Garden.

> Could the government prohibit Google from promoting its own products first and hiding the sites of its competitors in search results? Almost certainly.

You're just pointing out that the free speech guarantees of the first amendment aren't absolute, which isn't in dispute as far as I know. It doesn't follow though, that therefore the first amendment allows the government officials to dictate any content they want to be published by media companies.

> Is the company purporting to communicate a message?

Well, when twitter adds a fact-check to a tweet, they are obviously communicating a message.


> If that were true it would create a loophole so big that the first amendment would practically not exist. The President could require the New York Times to carry a daily editorial on the front page from the desk of the president, with just with an additional caveat "This editorial does not reflect the opinions of the NYT". The President could require TV stations to broadcast speeches from the president speaking in the Rose Garden.

If the government is, in effect, simply hijacking a private communications medium to carry a message that's clearly from a different speaker, that's permissible. In TBS v. FCC, the Supreme Court held that cable companies can be forced to carry local channels: https://supreme.justia.com/cases/federal/us/520/180/#tab-opi.... It found the burden on "cable providers editorial discretion in creating programming packages" to be insufficient to raise a first amendment problem.

Under the reasoning of TBS, Twitter has even less of a basis for claiming first amendment protection. Cable companies exercise "editorial discretion" in curating channels to fit into a limited number of available slots. Twitter doesn't need to do that and doesn't purport to do that.

> You're just pointing out that the free speech guarantees of the first amendment aren't absolute, which isn't in dispute as far as I know.

Not just that, but the guarantees are greater or less depending on the type of speech. The protection is very low for commercial speech that isn't purporting to carry any sort of message.


If Paul Clement had kept his mouth shut in oral arguments we might not be here. You of all people here certainly know that.


> Private parties are (with some number of government constraints) free to pick who to work with, who to play with, and who to talk with, and who they don't want to do anything with.

This is so true. The problem we have now is that online platforms are preventing us from making those choices for ourselves. We're being told we're not allowed to talk to certain types of people, often for ideological reasons.

Online platforms shouldn't limit our choices, they should empower us to find whatever content we want and block/remove content we aren't interested in.


Why doesn’t Twitter, one of your “private parties”, also get to pick “who they don't want to do anything with”?


To you and the OP's point, if I am understanding where you are going with this, Twitter previously had a right to remove people it didn't like, as it should. That seems fine. Although there is no obligation to provide any reasoning, I think it would have been better for Twitter to just say "yes, if you are ____ please don't come on here". I don't think it would have even had any consequences. It's not even weird given how social media sites tend to self-segregate. Instead we got lectures about policies and disinformation and algorithms and moderation teams. Why not just make explicit rules?


Because the point was to shape public discourse and opinion. Explicitly stating a bias that has been coded into the system somewhat defeats the purpose


It does, but because it has been purchased by a free speech enthusiast, it chooses not to.

Twitter has been doing exactly what you pointed out for the last several years, shaping public discourse and directing political narratives


Musk already made it clear that he's not an absolutist, but picks and chooses what he wants to allow.

Most prominently, his public justification for keeping Alex Jones off Twitter is that it's personal for him. Which is just fine with me mind you, but doesn't constitute any kind of "free speech enthusiasm".


I'm not sure how aware people (you) are, in general, about Twitter and other big tech (fb, ggle) accommodating the government and political parties to amplify their messages and curtail the distribution of certain other messages.

I really don't know if that's something you or others might know about or consider deleterious to the operation of a free nation. You might be a proponent of those messages and opposed to the messages that previously had been systematically removed from the platform.

The problem is that allowing these things to occur will ultimately be bad for you as well..

I don't give a fuck about Alex Jones


Musk is as much free speech enthusiast as he is astronaut. He is systematically and notoriously punishing any speech he does not like. He did that pretty much anywhere he had power.


Because they want to keep the protection, Section 230 I think, that shields them from liability for what others say using their service.


§ 230 allows platforms (and individual users, via moderation tools provided by platforms) to exercise active moderation based on their own view of standards without incurring liability as a publisher. Encouraging and protecting that active private moderation is the whole purpose for which § 230 was included in one of the most sweeping internet censorship laws the US ever passed (most of which, other than § 230, related to government censorship and was struck down for violating the First Amendment).

§ 230 does not require neutrality, and such a requirement would defeat its purpose.


You are being deliberately obtuse. No matter how one answers your question, most internet users still have unaccountable 3rd parties opaquely choosing who they communicate with [1].

Do you admit this is a problem, but you just don't see a solution compatible with the 1st amendment's freedom of association? Do you support the Civil Rights Act's restriction of freedom based on race and/or think it is constitutionally valid? Why would the same logic not hold for restricting companies such as Twitter then?

[1] Spare me rebuttals of "if they really wanted to communicate, they could do so by carrier pigeon!" - 99% of users won't go through with such effort, or even know who they are being herded away from. The remaining motivated 1% is too small to have any political power, and so the censor wins.


> Do you support the Civil Rights Act's restriction of freedom based on race and/or think it is constitutionally valid?

Sure, just as common carrier regulation is constitutionally valid. The Constitution isn't a suicide pact; the Founding Fathers very clearly did not intend it to be one. We've accepted a non-literal wording of the First and other amendments since the beginning.

> Why would the same logic not hold for restricting companies such as Twitter then?

Because being kicked off Twitter is hardly the same as not being able to dial 911 or purchase critical services. We've passed laws to correct specific, significant harms that are nothing like being unable to tweet. We weighed First Amendment rights against the rights of those being harmed in these situations and had to decide which conflicting rights mattered more.

That same process happens here. Different situation, different consequences, different decision.

> Spare me rebuttals of "if they really wanted to communicate, they could do so by carrier pigeon!" - 99% of users won't go through with such effort, or even know who they are being herded away from.

No need for a rebuttal. That's fine.


> Because being kicked off Twitter is hardly the same as not being able to dial 911

The Civil Rights Act also prevents Twitter (or any company) from banning users based on protected characteristics (e.g. race and sex). It is not remotely limited to critical services or common carriers.


We view certain companies that provide essential services as “common carriers” — who aren’t allowed to discriminate based on how you use their product. Common carrier status for essential services is one of the ways we protect freedoms in the US.

Examples: power company, phone company, ISP.

People are starting to view Twitter et al as similar to phone companies — and hence think they should be bound by common carrier rules.


I don’t believe ISPs are common carriers. And they should be, way before Twitter would ever get that designation.


They were, briefly, 2015-2017.


Twitter is not a common carrier, and has significant differences from phone companies.


Perhaps you could state what you think those differences are.


Sure. US law, particularly Section 230, makes it very clear that social networks are not common carriers. Section 230 indemnifies them for liability when moderating material that is "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected".

These are actions that would be explicitly forbidden for a common carrier to perform; as such, we can pretty conclusively state that Congress never intended to apply common carrier status to places like Twitter.


Right — I said “should” to explain why people socially expect that they should be treated that way.

I didn’t say the current law was that way. Twitter didn’t exist (nor did social networks) when the law you’re citing was written - so it’s quite possible Congress didn’t contemplate this case or express any intent at all.

That would be a good reason to change the law:

If the way it is written doesn’t match how it should apply because the case in question wasn’t contemplated at the time.


I don't think it should, either.

Twitter doesn't have monopoly power like phone companies or railroads tend to, nor are the consequences as impactful. There's zero impact on your ability to publish your personal free speech from being kicked off Twitter.


Most people in the world don't have Twitter accounts and get along perfectly fine. You need phone service to apply for most jobs, contact government emergency services, go to school. It's a far more essential service.

Although the definition of a common carrier isn't necessarily that it is essential anyway. It's that it is offered explicitly as being available to the general public, not involving any sort of individual contracts between carrier and users. Everybody gets exactly the same deal. In that sense, Twitter arguably qualifies, but being a common carrier doesn't mean you can't ban people. Taxis and airlines are common carriers and can absolutely ban you if you don't follow their rules.


I think there’s lots of good reasons to consider Twitter and similar as common carriers:

The presidents and major politicians of many nations communicate via Twitter.

Major businesses, eg Google and its subsidiary YouTube, only respond to customer service via Twitter, eg for improper account bans.

There are some businesses, eg YouTube content creators, who you can only contact through Twitter; and some businesses, eg news agencies, which primarily source their content from Twitter.

- - - - -

> Everybody gets exactly the same deal. In that sense, Twitter arguably qualifies, but being a common carrier doesn't mean you can't ban people. Taxis and airlines are common carriers and can absolutely ban you if you don't follow their rules.

The difference comes when Twitter denies the NY Post the same treatment because they posted a true story that was politically inconvenient for Twitter leadership during an election.

There is more to it than a common carrier can “ban you for not following their rules”. Common carriers have limits on their rules:

A taxi service can’t refuse to take you to a political rally because they don’t like your politics; nor a phone company disconnect you because they don’t like what you’re saying.


Sure, the issue is that every time a political partisan or politically partisan organization gets banned from a service, they and their followers will always believe it's because of the party they're a member of, and not because they were being an asshole. It acts as cover to engage in any kind of behavior at all, as long as you're a partisan. That isn't what those protections were meant for.

Rules are enforced by imperfect detectors of infractions because there is no such thing as a perfect detector. No matter how careful a review process is, failures will always happen. Innocent people are convicted by real courts through decades of appeals in spite of all the due process. You will always be able to find some instances where a member of party A does exactly the same thing as a member of party B, but only one of them gets punished. Does this mean we should try the best we can and accept that we will fail a non-zero number of times, or should we never have rules instead because that's the only possible way to be fair?

I'm going to be perfectly honest here. I don't follow Twitter at all. I've never had an account. I haven't read an article from the NY Post that I can remember in probably 20 years. I don't actually care about this stuff and think it's silly internet drama that should not be dominating national headlines. So it's entirely possible Twitter really was biased here. I don't know. Organizations are not never in the wrong. When black people were banned from every lunch counter in 15 states, it wasn't an innocent mistake by organizations trying their best to exercise their right to freedom of association. When communists were purged from Hollywood, they were definitely targeted because of their party, not because they'd explicitly done anything.

But without stooping to the specifics of whether one single incident was right or wrong, I would like to agree that it is okay for a service like Twitter to have terms of service and remove content and block accounts that breach those terms. Legally, at least right now, they are not treated as a common carrier, but if that ever changes, one of those terms of service can't be "no Republicans," but that also doesn't mean no Republicans can ever be banned. If you think you can prove disparate impact in a court of law, have at it, but just be aware that you're in the ample company of every political partisan ever who believed themselves to be some unjustly persecuted minority being suppressed for just telling the truth. The fact that some small number really were doesn't do much to impact the overall probability distribution of holding that belief.


Unless you choose to not bake a cake for a gay marriage. Freedom of association applies both ways.


Not if you don't want to get downvoted lol. The people in the protected categories get to force you to associate with them, it's really simple.


Everyone is in a protected category. Everyone has an age, a race, etc.


No.


You don't have freedom of association in America since the Civil Rights Act.


I think this comes close to equating capitalist free markets (rules that maximise participation) and anarchy (no rules).

What do you mean by some number of government constraints?


Posting online consists of an individual speaking and a platform publishing it. Social media is not a passive medium like a telephone; it acts as an editor, albeit algorithmically.

In my opinion, the fact that speech online has this dual nature is why there is so much debate about it. Here's a mock interaction...

Individual: "You shadowbanned me. Why?"

Platform: "We don't want to publish you 1 billion times on the internet"

Individual: "Then don't be an editor. Don't give people special treatment."

Platform: "We tried that at first; it did not go well. I can make my platform however I want to."

Individual: "But there are only a handful of people making choices that affect billions."

Platform: "The vast majority of these people are not banned or censored"

Etc etc, the debate never ends, because the two sides have opposing financial interests and political opinions/values.


This is perhaps true for social media in general, but less so for twitter. For example, twitter doesn't "publish you 1 billion times on the internet" unless you have a million followers. It's less convincing to say "because we don't want to transmit your message to the 1 billion people who actively chose to follow you".

How does this argument fare when it comes to email? Should I expect gmail to inspect my messages and not deliver them if they don't like what I'm talking about?


The default algorithmic twitter feed is filled with tweets from people you don't follow. Plus there's all the "trending" and other engagement features that boost tweets.


Sure, and I don't think anyone would say "shadowbanning" is when your tweets don't show up in the timelines of people who don't follow you, or in the trending section. It's about down-weighting in the feeds of people who do follow you.
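A toy sketch of that distinction (the scoring function and field names are entirely hypothetical, not Twitter's actual ranking): a down-weighted tweet still appears in followers' feeds, just ranked lower, whereas outright removal would drop it from the feed entirely.

```python
# Hypothetical illustration of down-weighting vs. exclusion in a feed ranker.
# None of this reflects Twitter's real algorithm; it's a sketch of the concept.
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    engagement: float       # base ranking score
    weight: float = 1.0     # 1.0 = normal; < 1.0 = demoted ("shadow" down-weight)

def rank_feed(tweets):
    # Down-weighted tweets remain visible but sink in the ranking;
    # an excluded tweet would instead be filtered out before sorting.
    return sorted(tweets, key=lambda t: t.engagement * t.weight, reverse=True)

feed = rank_feed([
    Tweet("alice", engagement=10.0),
    Tweet("bob", engagement=50.0, weight=0.1),  # effective score 5.0
])
# bob's high-engagement tweet now ranks below alice's, but is still in the feed
```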


First, I'm not on team platform; apologies if my message implied this.

Most of the controversial decisions by platforms have been about people with millions of followers. I also exaggerate when saying "billions"; it can be thousands.

I think email is different because it is a much more passive technology. It doesn't have a news feed, just things like spam filters, etc


> twitter doesn't "publish you

Worth noting that Twitter just announced that users will see more tweets from people they don’t follow: https://techcrunch.com/2022/11/30/twitter-recommended-tweets...


Good to know. Hopefully it won't affect the reverse chron feed!


Twitter has a "latest tweets" mode, where you see all the tweets from people you follow in reverse chronological order.


You cannot be your own platform on the internet. As we have seen, they will go after your CDN, then your domain name, even your personal phone number and email address, then they will go after random ISPs getting them to drop packets to your site.

We can accept that social media platforms can engage in censorship. But net neutrality needs to be restored so ISPs can’t disrupt legal traffic.


Regarding shadow banning: In just the last hour I came across the concept of "heaven banning" which tickles me. https://twitter.com/nearcyan/status/1532076277947330561

(Note: The article in the image is fake, hence the word "hypothetical" in the tweet.)


The first time I read about this, I got a strong feeling of Ellisonian existential dread.


Curious, when you say 'Ellisonian existential dread'.. are you referring to Ralph Ellison (Invisible Man) or to Harlan Ellison, the sci-fi writer (e.g. I Have No Mouth and I Must Scream)? Because... it works for both in this case I think.


And what of freedom of association as a right and cultural value?


Exactly! I should be able to join or leave groups discussing any sort of topic or view point I like on social media platforms without having my options limited or my views censored.

If I don't want to associate with someone I can unfollow them, unsubscribe from their subreddit, block them etc.

What matters is that it's my choice to do that. We shouldn't have that choice made for us by the people running online platforms.


If I make an online space (subreddit, FB group, etc) about a specific topic (eg hiking) and some clown comes in and repeatedly posts about unrelated politics, that person should be moderated away.


The motte: wanting to ensure a niche forum remains on topic.

The bailey: getting sites removed from the Internet.


Um what?

So me and my buddy Tim are going to come to your house and talk about how much you suck.

Autoexec: But this is MY private property.

So, yes. So are online platforms. They can kick you out at any time they want and you can pound sand. If you don't like it, set up your own server online to freely associate. You don't get to demand their resources. You don't get access to their network.


> So me and my buddy Tim are going to come to your house and talk about how much you suck.

My house isn't a global social media platform. It's a private residence. Different rules. It's not that small private online groups shouldn't exist either, but we absolutely should expect major social media platforms to uphold free speech ideals and allow people to speak without excessive censorship. We should seek out and support platforms that do this and shun those that do not.

I have no need for another gatekeeper deciding for me what I'm allowed to see and hear based on their personal ideology. A platform like that isn't serving me. If someone wants to create a platform as a service for others, that platform should empower the users it serves to find and engage with the content they want and block/ignore what they aren't interested in.


I hope conservatives that use the "social media is a public square" argument will finally approach logical consistency and call for a publicly-owned, federally managed Mastodon site that upholds all the free speech they think people should have.

I think morally, social media platforms should work to stomp out nation-state disinformation campaigns, tell professional trolls to fuck off, and can spam and illegal content. But as opposed to you, I'm not convinced they should be legally required to do anything but the last item on that list.

> If someone wants to create platform as a service for others, that platform should empower the users it serves to find and engage with the content they want and block/ignore what they aren't interested in.

If someone wants to create a platform as a service for others, they can do whatever the hell they want.


> I hope conservatives that use the "social media is a public square" argument will finally approach logical consistency and call for a publicly-owned, federally managed Mastodon site that upholds all the free speech they think people should have.

Not a conservative, but I support free speech and I'd definitely like such an instance. What point are you trying to make?


The stereotypical conservative supports the right of businesses to do pretty much whatever the hell they want, but gets mad when Twitter or Facebook tries to moderate disinformation and hate speech. But they also hate government being useful or providing public services to anyone, so they would likely oppose a government-funded "public square" platform. And surely, they would be upset when that platform inevitably has to moderate hate speech and veiled threats and trolls, who would then have even more credence to complain about "censorship" since the government would be doing it, causing even more shit.

Basically, I am saying conservatives have conflicting views on the rights of businesses to conduct themselves when they try to say Twitter must let them say nazis are cool and vaccines don't work.


As I said, I'm not a conservative, so I can't reply to this.


You asked what point I was making.


> think morally, social media platforms should work to stomp out nation-state disinformation campaigns, tell professional trolls to fuck off,

If you think about this for a bit you'll realize how impossible this actually is. The professional chaos monkeys are going to play both sides so you always lose.


Please show me where these different rules are codified? It's a bullshit argument and you know it. Social media does not make sense globally. You cannot appease both the US government, conglomerate of EU governments, and the Chinese government without being in conflict of what one of them wants. 'free speech' cannot exist with current national laws in most places and would be banned at DNS level, with the more authoritarian governments banning it at the transit level.

You again want access to someone else's network and software you have absolutely zero right to. You use these services under surveillance capitalism without paying directly yourself and make demands of them. Their platform owes you less than nothing and we see this codified in the TOS's that state exactly that.

Disconnect from them completely and take their power away. Run your own services and control what is said on your own network. Demanding that someone else do it is insanity.


> Please show me where these different rules are codified?

Well, for starters, there are laws where I live against billions of people showing up in my home even though billions of people show up on social media platforms. The rules for residential housing and social media platforms are different just as their intended uses are different.

> Social media does not make sense globally.

And yet here we are... on a global social media platform. It somehow works.

> You cannot appease both the US government, conglomerate of EU governments, and the Chinese government without being in conflict of what one of them wants.

Social media platforms shouldn't concern themselves with what every government on earth wants. An online service should (generally speaking anyway; this is obviously an oversimplification) only worry about what the law says in the country it's located in, and it should fall on users to make sure what they post doesn't violate local laws. Some governments won't like that and may block your platform but that's a problem for that country's citizens to sort out with their repressive government.

If every person and service on the internet had to concern itself with the laws of every nation on Earth the internet itself would be impossible.

> You again want access to someone else's network and software you have absolutely zero right to. You use these services under surveillance capitalism without paying directly yourself and make demands of them. Their platform owes you less than nothing and we see this codified in the TOS's that state exactly that.

Within certain legal limitations, a platform has every right to be as restrictive as it wants. It can allow only connections from certain IP ranges. It could only allow people to sign up if they pay them. It can refuse to allow anyone to post anything but the number "8" if it wants to.

What a platform has a legal right to do, and what it ought to do are very very different things. I'm not arguing that a platform MUST under the law allow anyone to post anything they want. I'm stating that a major social media platform SHOULD allow people to discuss the topics that interest them freely and openly. It SHOULD avoid censoring people for ideological reasons.

We, as users of social media, should make demands of the platforms we use. They exist to serve us. We provide the entirety of their content, which those platforms get and publish without ever paying us directly. It is OUR content that drives the traffic and engagement they see. We should reward platforms that follow free speech ideals and enable us to discuss what we like and we should shun platforms that censor us and limit what we're able to see and do.

> Disconnect from them completely and take their power away. Run your own services and control what is said on your own network.

That's the problem with a global internet isn't it. Let's say I do decide to create my own social media platform. I still need to depend on others to host my domain and my servers. I depend on payment processors to pay for those things. I depend on every other ISP to carry traffic to/from my IP. I depend on protocols and software written by other people. You cannot have a social media platform on the internet without depending on others. At every step in the chain countless people have the power to censor things they don't like. It's better for everyone when they don't. Censorship should be heavily discouraged everywhere, but especially online.


> My house isn't a global social media platform...Different rules...we absolutely should expect major social media platforms to uphold...

How exactly is this different from you deciding that all the popular kids at school must be friends with you, that churches have to welcome you into their pulpits to preach, and that Random House has to publish the book you wrote?

Neither the world, nor other people, exist to meet your needs. Nor to implement your professed value system.


> How exactly is this different from you deciding that all the popular kids at school must be friends with you

If we're running with the school/friends analogy the problem we have now is that schools are preventing us from becoming friends with all the popular kids at school. Schools shouldn't control who we are able to be friends with at all. Schools should enable kids to gather together and form friendships with the kids they like, and also allow kids to avoid harassment from kids they aren't interested in without kicking those kids out of school because other kids might want to be their friends.

Social media platforms shouldn't censor what topics we're able to discuss or decide who we can talk to. They should provide a place for users to gather and discuss what they want, while providing a means to unfriend/hide/unsubscribe from things users aren't interested in (while still allowing others to see those things if they want to).

> that churches have to welcome you into their pulpits to preach

If a church puts up a giant welcome sign inviting members to come up to the pulpit to preach, and I find that invitation valuable, they surely still have a right to turn me away for arbitrary reasons, but that probably makes them assholes and I'd look for another church. A book publisher doesn't have to accept my manuscript, but a world where the only books that can get published are ones that support a certain ideology would be dangerous and undesirable.

Social media platforms can exist to serve the needs of the people, or they can exist to be self-serving. The more platforms that exist to serve the people by providing a space for them to discuss what they like without forcing them to see content they aren't interested in the better off we all are.

We should support social media platforms that exist to meet our needs and we should reject social media platforms that fail to. We, as a people, are best served by social media platforms that respect the ideals of free speech. Online platforms would rather dictate what we're allowed to see and hear, but while that sort of self-serving behavior is common it is also increasingly harmful as the influence of a platform grows and as it becomes increasingly difficult for less repressive alternatives to exist.


> Social media platforms can exist to serve the needs of the people, or they can exist to be self-serving

Why do you think these are the only two possibilities?

Why can’t a social media company exist to serve the needs of only some people? Or to serve the needs of everyone, while also serving its own needs, in a balance decided by the company’s owners?


The undersea cables are private property, also.

What if all of the layer 3 networks got together and decided to not allow you in particular to demand their resources?


Then I would assume that you're screwed. With that said, there's a regulator on IP allocations and there may be something doable on the governance level to revoke said IPs if you're not playing nice with the rest of the world.

As for the actual server resources, you have not established where you have any legal right to them.


Perhaps, beyond a certain level of monopoly, there's more of an obligation to provide equal access to service for all?


What obligation are you referring to? Wanting something does not make it reality.


> If you don't like it set up own server online to freely associate.

How exactly does your server get online without utilizing the private property of some entity?


You don't.

With that said we typically treat transport (phone lines, network connectivity) different than applications, and absolve the carrier of the traffic from the content of the traffic itself. With non-transport hosting it gets more complicated quickly.

But until the time we adjust our laws (and good luck with that) it's going to be difficult to nullify property rights and freedom of association of the property owners over the users of the services.



TL;DR (I did read it, but very quickly): the vast majority of for-profit entities do not give one shit about free speech. If you think any online forum does, you are in conflict with all evidence presented so far.

But it is an important point not to have just one site like Twitter or Reddit or even HN, so we can go to one of the competitors and complain about the former without angering the latter.


I think that the most virtuous third-party platform is the one that allows its users to associate with each other with the least intervention, probably with some limit, such as explicit threats of violence being moderated away. Seeing where the dice land on each social media website, I'm almost always disappointed that the websites are too heavy-handed. I suppose that "4chan-style chaos as a business strategy" is less profitable than what they currently practice though.


Freedom of speech exists in dozens of countries, but "Free Speech (TM)" is a very US-specific thing that seemingly generates an infinite amount of nonsense. People get lost in abstraction and start to think that giving a megaphone to fascists, like Twitter did even before Musk, is somehow good because the best way to defeat fascism is supposedly in the "marketplace of ideas". Maybe on planet Mars; on planet Earth this is not at all how it works. The rise of fascists correlates directly with diminishing freedoms, and to defeat fascists you need the Red Army.


Political speech, especially unpopular political speech, is among the speech that I think is most deserving of free speech protections.

I specifically want the political speech of the people most ideologically opposed to my point of view to be able to make their speech, free from government intrusion on it. I will almost surely not like it, but I will defend vigorously their right to say it.


What if my political speech is inherently violent? Would I be allowed to support a political group or ideology that directly called for ethnic cleansing?


Yes. And other people should be allowed to criticize it.


You can't be stopped from supporting them, only telling me that you support them. That's basically the problem.


I think yes, it should be allowed.


> I specifically want the political speech of the people most ideologically opposed to my point of view to be able to make their speech, free from government intrusion on it.

Neat. But I'm under no obligation to carry that shit in my blog's comment section, social media platform, or any other service. If you want to stand up some service you're under no obligation to carry my speech that you don't agree with either.


I completely agree.


This not only begs the question, it sets up a situation where expression is entirely controlled and curtailed by those at the levers of power. It is a recipe for an entirely unfree future, one where right and useful ideas will not be allowed to see the light of day because they will cost someone else money and/or power. That is not a future I would want to force on anyone. This is why so many oppose freedom — they're fearful of Bad Things, not realizing that while there will certainly be some negative consequences to freedom in speech or in living, squelching those will also destroy much more that is good and helpful.


And to defeat the Red Army you need lizards, to defeat the lizards you need Chinese needle snakes, to defeat those you need gorillas, and when winter comes around, those freeze to death!


> But many countries where Twitter operates are no friend to free speech

Isn't that every country? I'm not American but to my understanding speech calling for direct violence is illegal in the US, where Twitter is based.


Free speech is a spectrum. Speech can be more or less free but when used on its own it usually refers to the freeness level allowed in the US.


But why should US legal allowances/restrictions be the baseline for the global cultural definition of free speech? And seeing as the current discussion of this is focused on Twitter, ~75% of its users aren't in the US.


The US is used as a benchmark because its ideas about legal rights are culturally influential and it’s where the headquarters of Twitter is.


Indeed, Twitter has a legal right to moderate how it pleases.

The question we should be asking is HOW content is being moderated. Shadow moderation, when a forum tricks authors into thinking their removed or demoted content is publicly visible, is an abridgement of free speech culture we should be addressing. I recently gave a talk on this [1] which led to some discussion on HN [2]. The wider public is generally unaware of the degree to which this happens— to all of us.

I'm pretty sure Twitter already shadow moderates content. My reply here [3] only shows up when directly linked, not under the parent tweet [4], and it wasn't hidden by FIRE.

This is openly admitted when platforms say "Free speech but not free reach" as in the case with Musk and Twitter, or when they talk about raising or reducing content as in the case of YouTube [5].

[1] https://cantsayanything.win/2022-10-transparent-moderation/

[2] https://news.ycombinator.com/item?id=33475391

[3] https://twitter.com/rhaksw/status/1594103021407195136

[4] https://twitter.com/TheFIREorg/status/1594078057895063553

[5] https://blog.youtube/inside-youtube/the-four-rs-of-responsib...


The "sides" of this argument have flipped. The pre-Musk arguments about Twitter's responsibility as a "public square" are now about how Twitter can do whatever they want as a private actor. And vice versa on the other side.

I wonder if the left will be able to take advantage of Texas' social media law[1] now that leftist accounts are being banned[2].

[1] https://www.nytimes.com/2021/09/09/technology/texas-social-m...

[2] https://theintercept.com/2022/11/29/elon-musk-twitter-andy-n...


Note that neither the left nor the right have uniform opinions.

The people advancing one argument pre-Musk might be different from the people advancing the argument the opposite way, even on the same political side.

But I agree that on aggregate, the same logic is now being applied to form opposite conclusions.


Children taught by the SS would send their parents to prison because their parents assumed free speech in a private setting.

Once you realize that these abuses are inherent to a system that doesn’t culturally accept free speech, then you see the danger of the current moment more clearly.


This controversy only exists because there's no "public square" of the internet. Every online venue for speech is private. It shouldn't be that way. We should have federal platforms that charge users at-cost and whose only rules are the laws of the US. Every account linked to the real identity of a citizen and violations of the law the same as those in real life: a visit from the cops. You deserve an e-mail address that can't be taken away from you without a court order.


> The White House is free to make the argument that Twitter should police “misinformation” and “hate speech” on its platform. But it has no legal basis to say that Twitter must do so.

False. Or, rather, it's false if you want Twitter to maintain its liability shield in Section 230 of the Communications Decency Act [1], specifically 230(c)(2). Without this, Twitter becomes liable for any content. This is of course US-centric. Different countries have other requirements.

> Musk may not be the best — or most consistent — messenger for free speech. And you may not agree with his interpretation of free speech.

We all know what Musk means when he says "free speech". It's the same as when any conservative says "free speech". It means "hate speech". It means not wanting to get banned for spouting transphobia (in particular), homophobia, racist screeds, misogyny, etc.

> If we care about an America whose support for free expression goes beyond the law, we must support a culture of free expression.

No, we shouldn't. Every time some variant of free speech absolutism has been tried, the results are always the same: it fills up with Nazis. Everyone else leaves. Even 4chan has a ToS (basically "no CP"). That's the place for unhinged hate speech and conspiracy theories.

Platforms don't want to be known as being a Nazi hotbed. Advertisers flee. Beyond that however platforms should consider what's best for the total user base. Allowing a few extremists to spew hate speech in the name of some ideal of free speech culture at the expense of everyone else is narcissism personified.

I'll close with noting the paradox of tolerance [2].

[1]: https://en.wikipedia.org/wiki/Section_230

[2]: https://en.wikipedia.org/wiki/Paradox_of_tolerance


Is it transphobia to argue if children should have access to surgery? To argue about puberty blockers? To argue about therapies different than transition for dysphoria?

Society is changing, and people want to discuss it. To be heard. Yes, that makes change harder and slower. But you absolutely do not get to choose the "right" solution and call everyone that is concerned and unconvinced "transphobic"

Eg: https://www.kjrh.com/news/local-news/oklahoma-lawmaker-files...

Pretty weird if something can be discussed in a legislature, but not on Twitter.

I don't know that I agree with him. I'd need epidemiological data. But that is not the point. The point is that we, as a society, need to discuss.


> Is it transphobia to argue if children should have access to surgery?

It is transphobia to deliberately and knowingly lie that children are getting gender confirmation surgery. Because they aren't.

> To argue about puberty blockers?

Puberty blockers are non-invasive and reversible and given as medical care for harm reduction. Why do you think what medical care someone gets is anyone else's business? More importantly though, such discourse is never in good faith.

> Pretty weird if something can be discussed in legislature, but not on twitter

It's still misinformation and hate speech when it's "discussed" in bad faith by reactionary state legislatures.

> The point is that we, as a society, need to discuss.

First ask yourself why anyone else gets a say in another person's medical care. It's really no one else's business.


The most interesting thing about your reply is that you are, yourself, parroting misinformation.

Children are being subject to surgery, and puberty blockers are not “non-invasive and reversible”.

This is trivially verifiable and well-documented, including, most recently, in the New York Times; that you’d label objective, obvious truths as “transphobia” demonstrates exactly why we need robust, open discourse.

https://www.nytimes.com/2022/11/14/health/puberty-blockers-t...


Honestly I completely agree with you. Even though the WPATH standards of care for transgender people clearly show the benefits outweigh the potential downsides, and those downsides haven't even been proven to exist yet due to lack of data, I don't think it's fair to children to force them to grow without hormones for years. We should really be allowing trans children to start HRT much sooner; that would deal with all the potential problems of puberty blockers.


Seems to me you managed to post this on a highly moderated web site.

If this web sites allowed flame wars would we all be better informed on this topic?


> Children are being subject to surgery

There is not one single mention of children being subject to gender-affirming surgery in that article. At all.

You have been continuously repeating this misinformation and have failed to produce one single shred of evidence to suggest it is anything but.


The question seems to hinge on the definition of child. Totally my fault, I said "should children get access to surgery". The productive way to go is to talk about ages and interventions available.

What we can affirm is that people in Slovenia can have sex reassignment surgery at 15, in Scotland at 16. In most EU countries, after 18.

https://fra.europa.eu/en/publication/2017/mapping-minimum-ag...

And then we can ask if those policies are generating more happiness than suffering, or more suffering than happiness.

The NYT article mentions a 16 year old getting a mastectomy, hormones, and then regretting it. I would consider a 16 year old a child, but you might not, and that is fine. The useful question is still: was it too soon? Maybe not! Maybe she is just unlucky, and mastectomies for 14-year-olds will make the world a better place.

But it is a discussion, and people will not accept that they don't get to ask questions and have opinions.


You might have misunderstood Section 230(c)(2)

Section 230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."

I.e.: you can remove some content without being considered an editor responsible for all content.


I have a lot of respect for FIRE. Hell, I designed and built their website over a decade ago, when the organization was primarily focused on speech codes on college campuses. They were doing great and important work then, and continue to do now.

It's also nice to see them attempting to separate "free speech" into two buckets of meaning, the legal and the cultural. It's a point that gets muddled.

However, the article, like a lot of "free speech culture" defenses I see, fails to explain why "free speech culture" has to, essentially, be "one way". The freedom of someone to say something, but without the freedom for someone to speak against it. If the consequence of saying something is a lot of people mocking you for it, that can be, and often is, just as chilling as any specific action.

It also fails to discuss, at all, how actions and speech are somehow distinct things. If you're saying something I disagree with, am I morally obligated, within "free speech culture" to sit there and hear you say it? Am I morally obligated, within "free speech culture", to support businesses which publicly say things I disagree with, or things which specifically target me, or my family? If not, isn't then the monetary consequence of "free speech" potentially chilling?

And if people should/must be free to speak against speech they disagree with, and if people should/must be free to deny business to businesses they disagree with, then isn't the "free speech culture" defense just a disagreement over whether someone, or some group, is right in the speech they use, and actions they take? The argument isn't about broader principles.

Elon Musk took over Twitter and disagreed with speech, and actions, the previous owners took. He reversed course. He's "free" to do so. He also took issue with speech and actions the previous owners didn't take, and banned accounts whose speech he disagreed with. He's free to do so.

Individuals are free to speak against that. Individuals are free to take their business elsewhere because of that.

That is, as far as I can tell, what a "free speech culture" should/must mean.

Finally, I took note that their most compelling "you have to be for 'free speech culture'" cautionary tale in the article was explicitly not about private individuals, or companies, but a government's (CCP) ability to pressure private companies – something which is explicitly rebutted by "free speech law" as bounded by the author.


I agree. People being able to call out and criticise the views of others is an important facet of free speech.

I see the problem as:

* disproportionate justice such as dogpiling

* trying to get people fired, by applying pressure on employers

* trying to de-platform, e.g. removing accounts and websites

If it was just replies, there wouldn't be a problem.


All of those actions are replies.



Free speech should be paired with free action. Why would telling a lie be treated differently than committing a crime?


Obligatory XKCD: https://xkcd.com/1357/


I'm curious why these discussions sidestep the fact that a company has a fiduciary duty to maximize shareholder value. It is understandable, from a shareholder perspective, to not want your investment's brand next to potentially unsavory content. It seems that there is a real and unresolved tension between a culture of free speech (which I wholeheartedly endorse) and modern-day capitalism.


I've heard this argument a few times and my response is that I don't think it's that simple in the case of Twitter. Abstractly I agree that the limit of a corporation's tolerance for unsavory content approaches the politically correct mean as time goes on. But I honestly feel that Twitter was nowhere near that mean and that their moderation decisions were far to the left, essentially excluding over 50% of the population. I don't believe that people pulling their ads are doing so because they think it's a good strategy or because they have issue with any content currently on Twitter. I honestly believe they've done so to their own detriment as part of the culture war that seems to surround Musk. In time the value of advertising will speak, the fears that Twitter will turn into a cesspool will prove unfounded, and the fiduciary duty of the people who pulled their ads from Twitter to their own company shareholders will kick back in and they will return. If Twitter was, say, 40% agreeable under its previous moderation team, I believe there's room for it to grow to 80% agreeable by tolerating more diversity of thought.


No need for weird conspiracies. An actual ad executive explained to Musk what problems he had. Musk proceeded to get pissy and block him, which I'm sure the rest of them didn't find very reassuring.

So far what I've been hearing is that the ad industry considered Twitter to be a bad place to advertise to start with.

Then Musk came in. First thing he does is to shout that there's too many bots on the platform, which I'm sure is just the thing one wants to hear when advertising.

Then he fired a lot of people, which seems to mean that Twitter is now hard to advertise on anyway, because internal systems don't perform well anymore and the people advertisers used to talk to got laid off. And Musk is heaping extra controversies on top.

Musk is simply incompetent at running this particular business.


You seem to be the one embroiled in conspiracies. Removing bots from the platform will make it more valuable and effective for advertisers in the long run. Firing deadweight will make it more lucrative in the long run. There’s a saying: “when you remove your hand from the bucket of water, the void will fill”.


No, I'm saying that one of the first things Musk did was saying "This has a lot more bots than I thought at first". He wanted to bail out over that. Advertisers of course saw that, and that didn't make Twitter any more attractive to them.

Yes, firing deadweight might help. Doing it immediately, before figuring out who's dead weight and who is not, that was the stupid part.


I think this is very good analysis with respect to Twitter. I suppose I'm really interested in how firms do their free speech vs. profitability cost-benefit analysis with respect to markets and environments where speech is restricted. For example (and I'll admit this is a glib argument) I am curious how Musk would reconcile his free speech absolutism with Tesla being active in China. Open to hearing thoughts on that.


Companies don’t have a duty to maximize shareholder value.

See here, for one discussion: https://www.nytimes.com/roomfordebate/2015/04/16/what-are-co...



The first link in that article (that the law in Delaware is clear about corporate purpose) is to the author’s own writing elsewhere.

Not that it makes him wrong, but citing yourself is not a great look, IMO.

I appreciate the link.


Profit != value.


A publicly held corporation does have a fiduciary duty to maximize shareholder value, but managers often do things that they can argue might maximize shareholder value in opposition to what the shareholders believe. The managers almost always get their way, even when they turn out to have been wrong, or even when they never sincerely meant to maximize shareholder value. Sometimes shareholders rebel and sue. Sometimes the lawsuits work out in the shareholders' interests.

As to the specific matter here, Twitter is now a privately held corporation. It has no such fiduciary duty. The lenders can presumably call their loans if they think Musk will bankrupt the company. Musk can legitimately believe that his vision regarding free speech will maximize the company's value, and he could be right or wrong, or he could be making it all up as he goes and not be sincere about anything, and he gets to. I'm not a mind reader, so I won't hazard a guess as to what he thinks about freedom of speech and profitability.


Let them eat gay wedding cake.


For real, why bother speaking when there's all-you-can-eat cake?


And not just gay cakes, we're going to compel bakers to make "It's ok to be white" cakes against their conscience. And let them eat those, too.


[flagged]


> I think he and the Cato Institute want to create a world where their funders can tell lies with impunity.

Free speech only makes it harder for people to tell lies with impunity since people are then free to discuss your lies and call them out for what they are.

"Sunlight is the best disinfectant". By all means, get those lies out in the open so we can publicly tear them to shreds, even better when online platforms are themselves fact checking posts.


This has a dependency on the public's skills in distinguishing between True, False, and (rarely an option available to most) Unknown.

There is also reason to believe that governments and corporations may prefer that people aren't very capable in this area.


There's some truth to this, but what's the alternative? Do we design communication systems with the assumption that all people are incapable of telling fact from fiction just to make it easier for those who can't?

I think it's far better to give people the tools to work out what is true/false and to debate difficult topics (even those without a clear answer) in a transparent manner.

I'd agree that both governments and corporations would be thrilled at the idea of a population that has no choice but to accept whatever they are told, but I think that as a people we're better served when we have the option to develop the skills to think critically and online platforms can play an important role in that.


> There's some truth to this, but what's the alternative?

I propose we try something similar to what we did when there was an undesirably low level of literacy in society: mass education. But in this case, the subject would be philosophy.

> Do we design communication systems with the assumption that all people are incapable of telling fact from fiction just to make it easier for those who can't?

I would say yes, but I would drop the "just to make it easier for those who can't" and replace it with something like "because an undesirably low level of people in society have substantial skill in logic, epistemology, rhetoric, etc, and it is at least plausible that this state of affairs could have negative consequences, including with regard to 'existential' problems like climate change or the preservation of 'democracy'[1]".

A big problem is that people tend to have pretty strong beliefs about their capabilities in any given domain, and the source of this confidence is very often substantially based on intuitive self-assessment, the output of which is a function of the very skills in question.

> I think it's far better to give people the tools to work out what is true/false and to debate difficult topics (even those without a clear answer) in a transparent manner.

100% agree. Though, we already have tools that could support that activity (HN is one such example), but they currently have no means of insisting that people do it (unlike in a classroom where unruly/etc students who are downgrading the learning of others can be and are asked to leave, in an adequately skillful way (sufficient to accomplish the goal)).

> I'd agree that both governments and corporations would be thrilled at the idea of a population that has no choice but to accept whatever they are told, but I think that as a people we're better served when we have the option to develop the skills to think critically and online platforms can play an important role in that.

Oh, humanity certainly has this option, it is not prevented by the laws of physics anyways. But having an option available does not guarantee that it will physically manifest - someone has to actually make it happen. Ironically, in the past I've run some of these ideas by moderators here and they....didn't have a lot of (even abstract) interest in the idea....which to me is a sign of...something.

[1] With luck, perhaps some day some non-trivial/adequate amount of humans on the planet would rise to a level of ability that they would be able to competently and accurately discuss the degree to which our "democracy" is actually democratic, a highly contentious and rather important topic that is absolutely butchered in any conversation I've encountered.

Free speech is another excellent example of a topic where most people (including genuinely smart people) simply lack the training required to discuss competently:

https://news.ycombinator.com/item?id=33823725


The problem is the DOS attack on culture where bad actors flood the zone with bullshit.


It's tempting to think that preventing an idea from being discussed online means it will go away, but that's never been the case. It's much better to have it be challenged and corrected publicly.

It really can be difficult to keep up with all the lies being told, but automated fact checking can be a great help here, and the more a lie is seen the greater the odds it will be challenged (provided we have the freedom to challenge it)


Like HN without moderation!


Moderation is still important, but it shouldn't be excessive.


When everyone’s shouting disinformation, fact checking often falls on deaf ears. There’s no evidence that sunlight is actually the best disinfectant; if anything, making it easier for conspiracy theorists to find each other has just expanded their echo chamber.


> There’s no evidence that sunlight is actually the best disinfectant;

We've never found anything that works better. For any given idea discussing and examining the positions for and against it is still the best way to get to the truth. The more transparent that process is, the better.

Conspiracy theorists don't care about facts or truth. They will act as they do regardless, but free speech ideals make it very difficult for echo chambers to exist because all ideas can be publicly challenged.


Your assertions are so wildly counter to what I've observed that I'm shocked someone could actually believe them.


There were, numerically, basically no alt-right people 20 years ago. Now regular assholes are being infected by this stuff, and a huge percentage of the information storm coming at me every day is angry, an outright lie, or a reposting of something hateful or untrue, and "sunlight" isn't helping anything. We are creating an insane landscape of anger, loathing and unreality, and now more and more public figures are amplifying it, including the owner of a major social media site.

Sunlight being disinfectant is not true. Stochastic terrorism driven by media hyper-engagement is definitely true.


> there were, numerically, basically no alt-right people 20 years ago.

Although the term has never had a good definition, for any reasonable definition of "alt-right people" this isn't true at all. It's the same racist far-right folk as always, just now online (like we all are).

> now regular assholes are being infected by this stuff

Not really. The idea of racist ideology as an infection is dangerous and simply wrong. You could spend all day every day listening to racist propaganda and hate speech and you'll never suddenly wake up thinking some people are better than others because of race. If everyone who listened to hate speech became mind-controlled into being "radicalized" researchers and anti-racist activists who do follow that stuff would have a massive problem, but it doesn't happen. There are people who are vulnerable to falling in with that sort of crowd, but even then it's not the message that hooks them.

> a huge percentage of the information storm coming at me every day is either angry, an outright lie, a reposting of something hateful or untrue and "sunlight" isnt helping anything.

The internet has enabled easy global communication, and some of what we get is malicious and manufactured to deceive, but mostly it's a distorted reflection of what people are feeling in real life. Social media platforms encourage the worst in us, and reward exaggerated anger and extreme, inflexible positions. That's largely to do with how social media platforms are designed and the algorithms that drive engagement. We don't need massive amounts of censorship to solve that problem.

Sunlight is helping!

I've seen people (even on this site) post information only to be corrected by others. I've even seen some of those people acknowledge that they were wrong and start to question their sources!

Having hate groups in massive online communities made it easier than ever to keep an eye on them. It's enabled us to see what lies were being spread and get fact checks published to increase exposure to the truth before many people were exposed to the lies (research has shown it's far more effective to inoculate people with education before exposure to misinformation than it is to get them to change their mind on something after they've been misled).

Because many of the traitors planning to attack the Capitol on Jan 6th were on well-known social media platforms like Facebook and Parler, police and researchers were able to use those posts to identify and prosecute people they wouldn't have been able to otherwise.

It all comes down to this: You can't fight against something you aren't allowed to see. As long as people are able to communicate (online or offline) they're going to spread misinformation. We shouldn't just sweep it under the rug so that we can pretend the problem is solved. We have to confront it directly and openly even when it's uncomfortable.


How do you think the conspiracy theory community would rank in terms of who produces the most misinformation?

There are many different metrics that could be considered, so maybe that would be the best place to start if we were interested in having an actually serious discussion on the topic.


The alternative is well-funded and well-connected institutions like the Cato Institute and other think-tanks being treated as authorities that the media and government can launder "the truth" through, and anonymous Internet commenters proving them wrong being censored.


Alternative? This is how it is today, only the anonymous internet commenters are just ignored rather than censored. Plenty of good and extremely popular arguments are made on internet forums; few see the light of day, because internet commenters have no agency and people are unwilling to engage in widespread labor strikes to enact change.

https://en.wikipedia.org/wiki/Propaganda_model


[flagged]


I think it would be the “cultural value” piece


There's probably not much point explaining the cultural value of free speech to somebody who thinks free speech is so awful they can't even bring themselves to mention it without using a minced oath, as though they fear even the term itself.


'freeze peach' is a dig at the blatant disingenuousness of some self-styled 'free speech absolutists' who are actually dedicated trolls out to upset people. Since you're new here this trope might not be familiar to you.


I'm very familiar with it. It's a minced oath used by those who disrespect free speech so intensely they suspend any pretense of decorum and resort to childish wordplay. It's the same as FOSS activists who insist on writing "M$" instead of Microsoft.


Restating your earlier opinion in more forceful terms is less than persuasive, but you're welcome to keep at it.


And your position is... "this is an obscure HN meme" (it isn't.) And "therefore you're wrong". Very persuasive.


I didn't say it was a HN meme. I think you could benefit from reading the HN guidelines if you want to have a good time here. They're in a link at the bottom of the page.


Free speech doesn't mean you can say whatever you want. It means you can speak with whom you want.

Free speech as codified in Bills of Rights is there to stop the Government from censoring people, usually opponents of the current Government.

The idea that you can say whatever you want is ridiculous. There are many laws in the way of this: assault, libel/slander, perjury, etc.


how is "you can speak with whom you want" sensible?

you definitely can't. even if you mean "you can speak with whom you want who also wants to speak with you"... well freedom of speech kinda comes down to what you'd be talking about. dictatorships don't prevent people speaking because of the people, but because of what they think they will or have said.


> We need a free speech culture to reap the benefits of free speech law

No we don't. This essay is trying so hard to say that while de-platforming may not be government censorship, it's just as bad. But it's not. In China right now, people are protesting in the street with blank signs because they aren't allowed to say anything about anything in public, and they are still getting arrested. There's a sort of slippery slope argument given in the article that we shouldn't be headed in that direction. But Twitter's content policies, for example, are in line with the culture and laws of Western nations as a whole. My understanding is some countries have hate speech laws that the US doesn't have. There is no "slippery slope to China," just the US wanting to be a rough-and-tumble outlier. It's the same kind of slippery slope argument that says having "normal" healthcare like other Western countries would make us communist or something.

Social norms are changing; that's not fascism. That hyperbolic metaphor has gotten out of control. We have a republic "if we can keep it," as the article says; losing our democracy to an authoritarian regime permanently (or for a long term) would entail a level of real suffering that is not comparable to (and has nothing to do with) it becoming permanently socially unacceptable to use a racist slur, say, in public discourse. There are no brilliant "ideas" embedded in sheer bigotry that we are missing out on, and giving less airtime to hate speech or misinformation is nothing more or less than that.

At the very least, the article claims to elucidate a "distinction" but actually blurs several things together, such as censorship; "canceling" (which can mean a lot of things but is sometimes just a simple result of public backlash leading to a TV show being canceled, and you can't force people to like some celebrity who committed sexual assault or is racist etc etc and see them the same way as before, or treat having a talk show on TV with advertisers as some kind of fundamental human right); and what ideas are considered worth discussing at academic institutions (which have always had their idiosyncratic preferences about what ideas merit discussion and research, I'm sure).


> Social norms are changing; that's not fascism.

Maybe they are, but what I've often observed is that a vocal minority of believers are shouting down any and all non-believers.

You're looking at a small mob and thinking that they're representative of all of society.


Then maybe you need a free "speech culture", meaning you need to be free to choose between the cleansed speech and the full-fat experience. Now that's a freedom that 18th-century people did not think about.

It always surprises me that people never question the writings of constitutions, as if they were god-given monoliths.



