It's worth noting that a very old user (/u/andrewsmith1986 IIRC) responded that at that time it was possible to add arbitrary users as mods without needing interaction/feedback/etc. from the user in question. If that was the case, then any user being a mod on any particular sub at that time doesn't really mean much.
Obviously I can't reference the comment in question right now, but I'll try to remember to circle back and add a reference when(/if?) Reddit comes back up.
I remember the change to requiring people to accept an invite. Adding people as moderators of distasteful subreddits without their consent was a common form of abuse just prior.
And it would in turn be worth noting that the creators of reddit had a philosophical and political commitment to free speech that drove their light-touch approach to moderation. It's not like the existence of that subreddit is evidence of an endorsement on their part.
> a philosophical and political commitment to free speech
Surely this is a meme by now? Any CEO that has ever said this about a website they control is just pandering to the crowd. Musk's Twitter has complied with more government takedown requests than the previous regime had.
It is now, but it was very different at the time. The old guard of the internet supporting absolute freedom of speech didn't use to be associated with Nazis. Some good examples that still exist today are the Electronic Frontier Foundation and, to some extent, the American Civil Liberties Union.
Would you mind explaining what #Stolzmonat means or what is behind it? I see that it is German for "pride month". Is it meant to be ironic or something like that?
It's a counter-protest by white male supremacists ("neo-nazis"/Proud Boys) who believe that diversity and equality diminish them. It's a parody of Pride Month.
It is pointing out the degeneracy that plagues society in a lighthearted manner, by using the German flag, which could easily be interpreted as a new iteration of the gay pride flag to the untrained eye
I'd be all up for an initiative that attempts to burst the liberal bubble which seems to reject the possibility that people have other things to do than follow the latest inclusivity trends online, but come on. You know well what this is about. Had to scroll like 3 posts down to find out that this panders to reactionary clowns[0].
If this were a protest I would expect polemic discourse, yet the "official account" is just throwing shit around and screaming with excitement when someone "gets offended"[1].
This is partly why I now prefer the term "free exchange of ideas" over "freedom of speech". I believe it more closely captures the essence of what makes free speech worth protecting, while also conveniently excluding stuff like this (among other things, like spam or antisocial behavior).
It also makes it clearer what's going on when people are waving the "free speech" banner over things like harassment and abuse. Allowing abuse not only fails to increase the free exchange of ideas; it often directly decreases it, because it drives off the people targeted by racism, sexism, et cetera, ad nauseam.
When the use cases for a given signal have proven to be reliably overwhelmed by the abuse cases for that same signal, the signal is no longer making a net positive contribution to freedom; it is causing net harm rather than help. At that point I don't think such a signal should be protected as free speech.
Except it won't work. Freedom of speech limited to well-composed political speech is not what it sounds like. Modern-day China (the PRC) guarantees that kind of freedom of speech in its constitution; it just labels any speech that doesn't conform to current Party statements as terrorist conspiracy. Even Nazi Germany had constitutional clauses like that.
It starts with porn and ends in gas chambers, and in the middle is freedom limited to "meaningful" content and activities. If you looked up and compared the state of freedom of speech, CSAM/CP regulation, internet censorship (especially government intervention), crime rate, and position on the democracy-totalitarianism axis for various random nations, they should all line up well against each other.
So you haven't seen how a real communist party labels opposing voices as not-speech? They literally do that. You'd think the problem is their arbitrary mislabeling, not the selective application of freedom; sure, it isn't a problem at all so long as you're the one doing the choosing.
Thank you; I was wanting another term so as not to conflate "freedom of speech" with something considerably different. I want to make things as unambiguous/clear as possible.
It does infuriate me when some people use "freedom of speech" as their excuse for "You must let me have a place to speak", when that isn't even guaranteed in the first place.
Freedom of the press does often get lumped in with speech in these sort of discussions, for good reason. That's another reason I prefer "ideas"; it's agnostic to the medium through which the ideas are conveyed (though again, in this specific case the images in question weren't intended to convey an "idea").
That doesn't really change much. Even if you relegate it to "free discussion of ideas", someone can still bombard it with bullshit ones "backed" by some circular reasoning, with backers either unable to comprehend or wilfully ignoring any logical counter-arguments. Get enough of those people and you get a toxic wasteland.
Freedom of speech is a legal concept that clearly doesn't cover CSAM. Free speech is a principle, but it also doesn't cover CSAM. "Fire in a crowded theater" doesn't actually work as a legal defense, but obscenity does.
That's another reason I prefer the term "free exchange of ideas"; by using different wording it helps avoid the confusion created by people conflating the general principle of such freedoms with any specific legal provisions that exist in the U.S. constitution. (Though I agree in this case my wording is in agreement with how the courts define "freedom of speech" in practice.)
As discussed ITA, that case was later overturned; and it’s worth remembering that its origin was as an analogy to justify criminalizing pamphleteering against the draft [0] (one could not imagine a more salient example of “political speech”)
Copyright/trademark violations, shouting "fire" in a crowded theater, direct personal threats of bodily harm, basically a lot of stuff which is already widely considered to not be protected speech legally, but which is "speech" (or perhaps "press") in the plain language sense of the term (and importantly, is not exactly an "idea" in the plain language sense of that term).
I think it helps strengthen the argument rhetorically, since people can't as easily use the existence of such "exceptions" as an argument for adding more, or bypass the principle entirely with slogans like "freedom of speech isn't freedom of reach". (Suppressing the "reach" of certain ideas obviously does inhibit their free exchange.)
This approach hasn't been "battle tested" yet though so we'll see how it goes in practice.
It doesn't seem to do anything other than play word games with what counts as an 'idea' instead of what counts as 'speech'. An attempt to reset the existing case law, philosophy, etc., without facing it head on. Perhaps we need to reconsider if images count as speech. What about algorithms? Is an algorithm an idea? Even if it is an algorithm that generates an image?
The issue I see developing is that any attempt to carve out what is not desired by some group is going to create standards that will let other groups carve out what wasn't intended in the first place. Look at how encryption is under attack and one common way it is attacked is by claims of how it promotes the spread of CSAM. So government asks for reasonable backdoors that will only ever be used to stop such material, yet tech circles realize that any such backdoor will allow for arbitrary power to block any material.
I think flag burning and provocative art are unquestionably intended to convey "ideas". In fact, they fit into that category far more cleanly than they do into "speech" in my opinion.
Nudity... depends on the purpose but probably not. I agree that's unfortunate if your stance is that there should be no restrictions on porn, but I'm not sure the arguments for why freedom of speech is a good idea really apply to porn in the first place. I think it'd be better to make that argument on its own merits rather than try to conflate the two.
> I think flag burning and provocative art are unquestionably intended to convey "ideas".
That seems very open to interpretation to me; it seems to me these kinds of things express a "feeling" much more than an "idea", and they might also be considered the "antisocial" behavior you mentioned in your previous comment.
I meant "nudity" only in the sense of "nudity", nothing more. e.g. "I want to make a nudist TV cooking programme", or stuff like that. No concrete "ideas" are being exchanged with that as such.
I'm not a free speech absolutist by any means, but I have generally favoured the exact opposite: "free expression" instead of "free speech", as that covers so much more. I think we can have an expansive "free expression" which includes many things while also having reasonable limits on that based on e.g. "does this reasonably harm people in a significant way?"
Yeah, I suppose there is some ambiguity there. Though I'd argue a standard like "does this harm people?" would be even more open to interpretation and prone to abuse. Just about any idea can be framed as "harmful" to some person or group given a sufficient level of motivated reasoning (in fact, almost all modern cases of mass censorship seem to try to justify themselves that way). I much prefer a clear principle with few or no exceptions.
I suppose there may be room for both. "The freedom to exchange ideas" is after all a subset of "freedom of expression" (though not necessarily of "freedom of expression, with a bunch of exceptions").
In your "ideas" phrasing the exceptions seem implicit rather than explicit by virtue of not covering everything. I think it's better to be explicit about "you can do whatever you want, except [..]".
That some people will try to abuse this seems inescapable no matter what; we'll still be arguing the details 200 or 2,000 years from now because there is no way to capture any of this in clear, neat rules. The best we can do is come up with some decent set of ground rules which convey the intent and purpose as well as possible. This is why we have judges to, well, judge, and "reasonably harm people in a significant way" seems like a much clearer guideline for this than the far vaguer "ideas".
Flag burning wasn't protected as free speech in the US until 1989. I have a list of stuff that was banned or censored in the past that would be considered unobjectionable by almost everyone today, and I suspect things would have been better if we had "freedom of expression" instead of "freedom of speech" (or "free exchange of ideas", for that matter).
Fair point. I agree with the sentiment of "you can do whatever you want, except [..]", in the sense that I think we should err on the side of personal freedom. To be clear, I don't think focusing on "the free exchange of ideas" means other freedoms aren't important, and I'm not proposing a constitutional amendment or anything. It's just that from a rhetorical perspective I prefer to use terminology that encourages the strongest possible interpretation of the argument I'm making, and I think, for me at least, "the free exchange of ideas" does that best for all the reasons I named in my original comment and its replies.
> This is partly why I now prefer the term "free exchange of ideas" over "freedom of speech".
What do you need in order to have a free exchange of ideas? Oh that's right, free speech. Free exchange of ideas is one of the benefits of free speech. But it isn't free speech. Also what about speech to entertain? What about speech to criticize? What about speech to just mindlessly express yourself?
> like spam or antisocial behavior
What counts as antisocial behavior? Rap music? Heavy metal? Is protesting government antisocial behavior? What about criticizing politicians? And more importantly, who decides?
It's amazing how little people know about free speech. It should be mandatory to have a class on civics and of course free speech. People have such a childish and superficial understanding of free speech. And of course these people always tend to be for censorship.
To be clear, I'm not saying freedom of speech is a bad thing. On the contrary, I agree protecting speech is a great way to ensure the free exchange of ideas. (As you noted, if you can't speak your ideas then you can't freely exchange them.)
But there's a lot of "speech" that should be and is already prohibited, both legally (CP being an obvious example) and through social standards of conduct (screaming expletives at random passersby is liable to get you kicked out of just about any venue).
In my view, focusing on the goal of freedom of speech (which is to say, the free exchange of ideas) rather than on speech itself communicates much more clearly on why the principle is important and where the line is. It makes it obvious why CP is not legally protected but Nazism is. Both are despicable, but one needs to be protected in order to preserve the free exchange of ideas while the other doesn't.
And again, I'm not saying other forms of expression that don't convey ideas shouldn't be allowed; just that the reasons why they should or shouldn't be allowed are separate from the reasons why the free exchange of ideas is important.
Yes, this wording does narrow my argument somewhat. But I think it largely narrows it in ways that make it easier to defend rhetorically, especially when trying to apply the principle to non-government entities such as the large social media conglomerates that own our modern town square.
At the time reddit was not some unknown back corner of the internet; it had already begun working with law enforcement to enforce anti-CSAM laws due to material being traded in private messages and private subreddits. That it took media exposure to take down the specific subreddit indicates it was likely on the legal side of the line, though it was going close enough to the line to make others uncomfortable. If the material was actually illegal, wouldn't that have made reddit the largest clear-net site containing CSAM? In such a case, I find it hard to conceive that media exposure, and not legal action, is what led to it shutting down, and with no admins being arrested it seems the most reasonable assumption left is that it was on the technically legal side.
This would likely be like the use of underage subjects in nudist art, whether painted, drawn, or photographed. Such art is generally not considered pornographic and is legally protected, with some even displayed in museums and the like, yet websites will still ban the material to avoid any association with it and to keep users from trying to push the boundary. As for the extent to which rules are enforced, many websites have trouble enforcing rules in general because the amount of user-generated content is much larger than the amount of moderation available, so something will slip through moderation from time to time.
Most states in the US could use a photo of a teenager in a bikini as enough 'evidence' to bring charges of Possession of Pornography involving a Juvenile, depending on what the actual photo depicts. Whether the contents of the image could be found to substantiate a conviction for the charge would be a trial/appeals issue. Nudity is not required for an image to be considered CSAM in most US states (or at the federal level), and there are also federal precedents that make cartoon depictions of underage characters count as CSAM.
I thought part of the reason the US is so blessed with a thriving startup community is the light touch the law takes towards its startups. For example, didn't YouTube, a billion-dollar acquisition, gain a lot of its initial growth through copyright infringement?
They're the same thing and used interchangeably. The sexual abuse comes from sharing lascivious pictures of someone taken without their consent, with the additional context that children are not understood to be capable of consenting to sharing lascivious pictures. It is not physical abuse.
Use of the images can be abusive even if the creation of them wasn't. Revenge porn is an obvious example. Even legally and consensually created images can be used to abuse someone.
I suspect there's a transformation through intent.
This does lead to poor policing (like the famous story of Google banning a man for taking a photo of his child's genitals to send to a doctor).
It appears that in present-day English people use the word 'abuse' to mean more than physical abuse. If you're a prescriptivist that may upset you, but isn't that the nature of being a prescriptivist? Does it perhaps beg the question "why be a prescriptivist?"? Aren't you, perhaps, literally standing against the world?
I want to reserve the phrase "child abuse" for real child abuse, so that it gets taken as seriously as possible and not diluted. I think that is a sufficiently good motivation to stand on this hill.
What would you call whatever that Japanese underage comic thing is? It features no persons and no one was abused, yet it's still illegal. I don't object to the term as applied, but I don't particularly feel great about the word 'abuse' being used for cartoon characters, since it degrades the experience of human survivors.
Not in the U.S., federally at least, where the justice department's guidance specifies that CP is media which "appear to depict an identifiable, actual minor."
Lawyers and courts say child pornography because laws are not restricted to sexual abuse material. Drawn child pornography is illegal in many places for example.
It's the only term I'd ever seen for it, in any context, until the last couple years. And I'm not a 4channer. Pretty sure it's just another euphemism-treadmill thing. (not that I mind in this case, I think the new term's fine, and certainly not worth fighting over)
What's "the industry"? The relatively new industry in moderating Internet communication? I've certainly only seen the term "CSAM" coming from cops and prosecutors very recently, and they're rarely shy about using their jargon in public communication.
[EDIT] I believe you that it's the term in vogue now, to be clear, but I'm skeptical it was as dominant, even if present, until relatively recently. If it was, then that entire world only recently started using it consistently in public communication, certainly. But, again, it's also fine, I don't mind the new term.
>And it would in turn be worth noting that the creators of reddit had a philosophical and political commitment to free speech that drove their light-touch approach to moderation
That's nonsense. The Sears debacle showed that reddit's leadership team was fine with deleting posts if it was going to cost them money not to delete them.
That 'political commitment to free speech' sure disappeared quickly when r/jailbait and u/violentacrez hit the mainstream media.
spez was fine with hosting a community of child predators because it was one of the most popular subs. It was the top recommended result when you searched for reddit on google.
You can support free speech without actively providing a community for predators
reddit used to be owned by Conde Nast. Sears got upset about a post and complained to Conde Nast, who then told spez to take it down. If you have a political commitment to 'free speech' that folds if you might have to face some consequences for defending it, you don't have that strong of a commitment in my opinion. Certainly not strong enough to justify hosting a community of child predators
Sears had an XSS injection issue, where you could change their breadcrumbs by manipulating the URLs. Some redditors changed and shared a link to a grill as a "Body part roaster" and had fun. Sears found out and got mad.
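As an aside, the underlying flaw was the familiar pattern of reflecting URL input straight into page content. A minimal, hypothetical sketch of that pattern (the Flask route, parameter names, and markup are my assumptions, not Sears' actual code):

    # Hypothetical sketch of the reflected-content pattern described above:
    # a breadcrumb label is taken straight from the URL and echoed into the
    # page, so anyone can craft a link that relabels a product however they like.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/product/<product_id>")
    def product_page(product_id):
        # Vulnerable pattern: the category text comes from a query parameter
        # and is trusted as-is when building the breadcrumb.
        category = request.args.get("cat", "Grills")
        return f"<nav>Home &gt; {category} &gt; Item {product_id}</nav>"

    # e.g. /product/1234?cat=Body+part+roaster renders
    # "Home > Body part roaster > Item 1234" on the official domain.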
Hi! I've done a bunch of trust and safety work and I see this trope a fair bit. Please help me understand what the difference is between, say, platforming racist harassment because of a "political commitment to free speech" and platforming racist harassment because you just kinda like racist harassment?
I get that it might be different in the heads of the people who have worked very hard to create those platforms. I'm just not seeing any difference in its effects on the world or on the targets of the racial harassment.
> Please help me understand what the difference is between, say, platforming racist harassment because of a "political commitment to free speech" and platforming racist harassment because you just kinda like racist harassment?
The difference is intent. Intent matters. Intent is the difference between murder and manslaughter, or between a conspiracy and mere speech.
Intent matters sometimes. To some people. But here, in either case the intent is to enable terrible people to, e.g., shout the n-word at people. So I don't see much of a difference in those terms.
Well it obviously mattered enough for the Founding Fathers of the US to enshrine freedom of speech in the Bill of Rights, and in the last 200-ish years it also mattered enough for US courts not to overturn or politicians of various parties to change it.
Now where I'm from (Germany), not just "hate speech" is against the law, but it's also unlawful to insult another person. It's complicated, but for the latter it's mostly sufficient that they feel insulted by what you said to them personally.
Now while I don't go around insulting people in person or on the Internet, I personally think - for instance - that it should be allowed to call a person an asshole, if they behave like an asshole. Yet, if I did that here, or even online to another German person, they could go to the police and press charges. If the public prosecutor is sufficiently bored, this very low barrier could also be used to dox me in an otherwise reasonably anonymous setting, since the resulting lawsuit could result in my data getting subpoenaed from, say, Twitter and my ISP. This has happened to other people here in the past.
Now while I'm not in favor of either hate speech or randomly and viciously insulting people online, I consider the law in Germany as outlined unreasonable in an online setting. I think freedom of speech is fundamentally more important than another person's right to not feel hurt, or than giving some powers that be a way to silence or punish me because I said something inconvenient that they merely claim meets some of the criteria for restricted speech here.
Mind you, this is the case all the while freedom of speech is enshrined in the German constitution as well. But I think it is a pretty good example of why I think freedom of speech should not be curtailed just in the name of another person's feelings about said speech. Even if a person, as you do, doesn't see a direct and tangible benefit in allowing that kind of speech, I would argue that a larger fraction of people are against disallowing it, because of the indirect consequences and where that line of lawmaking leads.
Another thing to consider is this: Say you're modestly happy with the current government wherever you live, and you'd be happy for them to have an "easy" way to curtail freedom of speech. Would you also be happy for the opposing political side to do the same thing? What if some extremists came to power?
This kind of reasoning is why free speech absolutists are so staunchly defending freedom of speech, even if it may be inconvenient or insulting to themselves or others.
You're conflating a lot of things here. One is the free exchange of ideas with freedom to harass people. Another is legal versus socially accepted. A third is the difference between "the cops should be able to arrest your for X" and "I am choosing to spend my days creating a platform for X". These are all importantly different.
You're also shading over exactly who gets free speech. If digital Klansman get to freely harass black people, many of those black people will not participate in public spaces, silencing them. Indeed, that sort of ethnic cleansing is often the goal of racial abuse. See, e.g., Loewen's "Sundown Towns". So whatever "free speech absolutists" think they're up to, in practice the result is often a diminishing of the free exchange of ideas that the Founding Fathers were clearly pursuing.
I don't think this is a very accurate description of how things work in Germany. It's exceedingly rare in any Western jurisdiction for the aggrieved party to press charges. This power is usually left to government prosecutors, who are probably more impartial than the complainant.
How about this example: Twitter silences and deplatforms some guy for using the oh-so-terrible "n-word" (because "sticks and stones may..." is not a thing anymore). Now, because this person is deplatformed, I can't find his "hate speech" when doing a quick background check, so I hire him at my company as the person responsible for recruiting. Now he makes sure no "n-words" get employed.
Big win? Whoever votes to silence the guy gets to judge.
If the only possible way you can catch a bigoted manager is by hoping that he spent a lot of time hurling racial abuse at Black people under his own name, then I think you really need to work on your management processes.
Sure, I find out about it a year later, since I am a small company, and now 3 black people were not hired because of it. Is this a big win? Also, I do believe that the word abuse is being devalued, and that such simple insults do not really qualify as abuse in general. Let's not forget that a great deal of black people love to use the same words when talking to each other; black celebrities get famous singing the word and self-describing as such.
That is a very white understanding of what abuse means.
Also, it seems wild to me that you think a small company means you somehow have less ability to supervise your employees. What's your plan if you hire a racist who wasn't dumb enough to post openly? Just let him go to it?
There is probably a line. But you don't know where it is and neither do I. You and I might agree that X is to one side of that line, but if we ban that behavior, then we have initiated a process that we might call line-discovery -- the search for the line that X was to one side of -- and line-discovery is highly prone to outcomes that result in bans on content from the other side of that line. So we don't want to engage in line-discovery, even though there are obvious examples of things to one or the other side of the line.
You may think you can ban the obvious things without ultimately engaging in line-discovery, but, the argument goes, you are mistaken. You will ultimately find yourself doing line-discovery.
You start out with obvious-sounding prohibitions on racism and hate speech, but eventually you're arguing about, say, whether it's racist to report on polling showing that violent protests are unpopular. [0]
And that's because banning any speech always leads to line-discovery.
So it comes down to a question of which scenario is worse:
A. You ban obviously bad stuff while accepting some risk of banning things that aren't actually over the line.
B. You privilege all content to avoid that outcome.
Some people are outraged by this framing and think it's obvious that you would want to risk banning some behavior to the right side of the line if it means eliminating the most obnoxious speech. But, basically, that is not obvious to everyone, no matter how many times they are reminded that there is some really bad stuff out there. [1]
[1] Interestingly, this is really not so different from the argument about evidentiary standards for punishing criminal behavior, except in that case the politics are flipped. There conservatives would rather risk punishing some innocent people if it means the absolute worst actors are guaranteed harsh punishment, but liberals think it's worth risking some amount of literal rape and murder in order to prevent punishing the innocent. So I think, actually, both sides are entirely capable of seeing this from the other side; they just don't want to.
Yes. I am addressing the second-order effects of each motivation.
Let's grant that the harms of the kind of speech you're worried about are exactly the same in either case. [0] Platforming "racist harassment" because of a political commitment to free speech implies that other forms of controversial speech will get the same treatment, preventing the kind of line-discovery I described in my previous comment.
"Platforming racist harassment because you just kinda like racist harassment" leads to who knows what. All we know about that person is that they like racist harassment. Maybe other stuff gets banned. Maybe not. Either way, it's unlikely to be in service of avoiding harmful second-order effects.
So that's an enormous difference between the two motivations. In the first case the position is in defense of an ethic of open dialogue and an attempt to prevent second-order effects that are harmful to that dialogue.
In the second case -- who knows.
It seems to me that the first motivation is much more likely to prevent the kinds of second-order effects I'm worried about and that distinguishes it from the second one.
Many people have said this better than me, but there are plenty of people who have thought they can do better than the current status-quo regarding user-generated content on the internet.
They end up conforming or losing money. There’s no one reason for this. You try to run a website visible to the world, you’re gonna be subject to a world full of reasons.
> the creators of reddit had a philosophical and political commitment to free speech that drove their light-touch approach to moderation
The notion that reddit ever was a bastion of free speech is absurd. They didn't "light touch" on upskirt, revenge, and kiddy porn because of "philosophical and political commitment to free speech", they did it because they didn't want to accept any more responsibility for content than they absolutely had to, and that's because it is not financially viable to moderate large communities using paid labor. That is why you see so many social media companies pushing against rules for online content; not because they're champions of free speech.
If it were about "a commitment to free speech", they wouldn't allow completely unaccountable and anonymous members to delete content, silently mute users, and ban users....including employing automatic tools that would ban people preemptively based on subreddits they posted in, or automated tools for powermods to ban someone across all the subreddits they moderated.
If you pissed off a powermod, your account could end up getting banned from nearly all the major, common subreddits - not just from theirs, but they'd communicate in private channels to other powermods that they wanted someone to be banned elsewhere.
Oh, and they were happy to moderate, severely, anyone who revealed any personal details about a reddit user. Which conveniently helps protect people doing stuff like upskirting and posting revenge porn.
"philosophical and political commitment to free speech", my ass.
>It's not like the existence of that subreddit is evidence of an endorsement on their part.
It is though. 230 be damned. These were not small or hidden communities. They were frequently on the front page. Generally, and especially in this case, silence is violence. The optics of that sub and the frequenters thereof are terrible. Do you want to try and justify their inaction further or concede this point? It should have never been allowed in the first place. Spez/Reddit et al. should continue to be shamed for their long-standing tacit approval of these communities. Earning respectability requires public contrition for bad decisions that affect the public and non-participants. As is typical, the communities were only shuttered when the victims' cries grew loud enough to affect their brand image. Cf. fph, wpd, the fappening, t_d, the Boston bomber fiasco, all the racist subs, and countless other controversies that spez/Reddit fumbled. Reddit deserves to close. The management team is evidently not competent or mature enough for the task and has repeatedly proven that its inability to learn from its mistakes, and its failure to become the proactive steward needed, will result in preventable harm to people who do not even use the platform.
I mean, define "endorsement". Permitting something to exist when you have the power to do otherwise is a mild form of endorsement. A commitment to free speech is, to an extent, an endorsement of all the speech that results.
> A commitment to free speech is, to an extent, an endorsement of all the speech that results.
Absolutely not. I'd argue that anyone should be free to talk with others about their opinion, but that doesn't mean I agree with that opinion. And letting them speak without shutting them down doesn't mean I agree either; it just means I agree that they should be able to speak freely.
What kind of dystopian viewpoint is that? You go around stopping everyone from saying stuff you disagree with?
Platforms like reddit are in no way similar to personal property like a house that you live in.
A better analogy would be, imagine you rent your house to someone else. You make a rule that tenants may display political messages in their windows, but only for one political party.
That would be illegal. You can prohibit all signs if you want, but specifically choosing what signs someone gets to display violates their first amendment rights and could trigger a fair housing lawsuit. It doesn't matter that you aren't the government and that you own the property.
The renting analogy fits even less though. Renters have protections against evictions that don't exist for websites. If I break the rules of my lease it would take a month or 2 minimum to get kicked out. If I break Reddit's rules I can get banned immediately.
> I'd argue that anyone should be free to talk with others about their opinion,
I _think_ I agree with that. Don't hold me to it, but it feels right.
> but that doesn't mean I agree with that opinion.
Yup, sure, agreed.
> And letting then speak without shutting them down doesn't mean I agree either, just means I agree that they should be able to speak freely.
There is a world of difference between "not actively preventing someone from speaking" and "setting up a system whereby someone's speech is enabled and broadcast". Casting this to the real-world - if someone's yelling their opinions on a street corner, and I simply walk by without stopping them, then no, that's not an endorsement. But if I notice them yelling, and walk up and hand them a microphone - or (more closely mirroring social media setups) I install a public-access microphone, and stand there observing who uses it without trying to control it - then yes, through inaction I have endorsed what they choose to do with it.
> What kind of dystopian viewpoint is that? You go around stopping everyone from saying stuff you disagree with?
In areas I control and am responsible for, yes. If a guest in my home started spewing (what I consider to be) unacceptable speech, then (depending on my history with and pre-existing respect for them), I'd either take them aside and ask them to reconsider their choices, or jump straight to asking them to leave.
Enabling and endorsement are two different things, no need to conflate the two. If I'm a dentist and tell my patient that they could use any toothpaste they want but that I don't recommend the specific brand that they use, how is that an endorsement in any way? I'm allowing them the choice without endorsing in this case.
In your analogy, the dentist is selling the bad toothpaste and saying "Go pick any from the shelf over there but not that one". Why is he selling it then? The dentist can't say "well it's a free market" as if that somehow absolves him. He sells the bad toothpaste, that's a tacit endorsement.
You can be a proponent of free speech and not allow people to stand on your porch yelling heinous things, but that's not what Reddit was doing. They knowingly profited from that speech.
The context is discussion of social media platforms where the platform already owns all the content and has the tools to decide what gets published and what doesn't.
It's not just that he allowed them to exist, he created a special one-of-a-kind "Pimp Daddy" trophy to award to the moderator of r/jailbait and r/creepshots.
There is some whitewash in the comments there: "[violentacrez] received the trophy because all the work he did to moderate the site..." as if he got the award for keeping things clean, but consider that he contributed the vast majority of those subreddits' content himself by cruising social media for salacious pictures of minors to share while he was in his 40's, and the award is named "Pimp Daddy."
IIRC Violentacrez modded like 50 different porn subreddits, and he did a good job by moderation standards, so he was appreciated by the admins for being the overseer of the porny side of reddit.
Sure, and one can absolutely criticize him for that, but I think if one wants to criticize how /r/jailbait and similar subs were handled it's better to do that directly rather than making a more nebulous insinuation that stands on weaker ground.
Anyone trying to lose weight and skip a meal today? Open a reddit client and do a subreddit search with "teen" as a prefix. Or don't, you know. Silly me for assuming that this had been solved after any of the various sketchy-porn related subreddit purges.
I'm not naive, user content is hard to moderate. But is it hard to say "any subreddits with these keywords go on a list for review"?
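Mechanically, that kind of triage is simple. A minimal sketch, assuming a hand-maintained keyword watchlist (the keywords and function name here are illustrative, not Reddit's actual tooling):

    # Sketch of keyword-based triage: subreddit names containing watch-listed
    # terms get queued for human review rather than auto-banned.
    WATCHLIST = {"teen", "jailbait", "barelylegal"}

    def flag_for_review(subreddit_name: str) -> bool:
        """Return True if the name contains any watch-listed keyword."""
        name = subreddit_name.lower()
        return any(keyword in name for keyword in WATCHLIST)

    # Example: build a review queue from newly created subreddits.
    new_subreddits = ["cooking", "teenfashion", "gardening"]
    review_queue = [s for s in new_subreddits if flag_for_review(s)]
    print(review_queue)  # ['teenfashion']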
I added that invitation flow in response to widespread abuse of the 1-way add moderator button. As an aside, the invitation message felt too bland and clinical during dev, so I added "gadzooks!" to the beginning, which became a meme for a while.
It seems like a funny distinction to make in the first place. Was he an admin of the site when it was hosting that sort of stuff? Anyone who was an admin at the time is responsible for the policies that allowed it to exist on the site…
He was also part of the team that gave the owner of the sub awards. It's not really credible to claim that spez wasn't somehow very enthusiastic about it.
More seriously, you really start to feel "ancient" when your body goes from "new year, pretty much the same as old year" ... to ... "new year, who removed vital organs and bodily fluids while I slept last night?! The bastards!!!"
Of course, it is a classic trope nevertheless ... and I did start to feel ancient even 15+ years ago ... but once you start noticing real changes, then you REALLY get that feel (and you know more is coming, lol): https://youtu.be/MqBNSMbEzI0
>Back in the day, you used to be able to add anyone as a moderator and it auto accepted.
>People would make shitty subs and add people, take a screenshot, shut down the sub or make it private, then use that screenshot to start a witch hunt. Violentacrez could have added you as a mod of the sub and you'd be in the same situation.
>TL:DR I used to mod a sub with Barack Obama and Snoop Dogg.
Consider: spez was the voluntary "mod" of the entire platform as CEO, and he maintained that subreddit.
Excusing him for the unsolicited mod invite is just optics management. It would be like saying lowtax had no responsibility as co-signer for the existence of subforums that his paid or unpaid staff maintained.
Back when Obama hosted an AMA on Reddit, a bunch of users added his account as a moderator to a bunch of subreddits, including some pretty objectionable ones. This prompted a change that moderators would be invited instead.
Doesn't seem that unbelievable when you look into some of the other stuff he's done. For example he secretly used his admin powers to edit user comments from users he didn't like or who criticized him.
I believe the issue the commenter above was taking was that just because someone commits, for lack of a better term, comment fraud, we shouldn't jump to suggesting he's also a paedophile.
Oh, I took that comment to suggest that the downtime could (mostly jokingly, I assume) have something to do with Spez dealing with a post he didn't like.
This is something I’ve seen repeated on Reddit often. Virtually any meta thread on Reddit about Reddit will have several comments containing these allegations. I can’t imagine how much time someone might need to spend to dig through all the noise to get to the truth, but it rings of something that might have a kernel of truth, given the prevalence and uniformity of such accusations.
In either case, it’s easily as speculative as the parent comment above, maybe slightly more so, since the parent came from Twitter.
>“Yep. I messed with the “fuck u/spez” comments, replacing “spez” with r/the_donald mods for about an hour,” Huffman, who co-founded Reddit with Alexis Ohanian in 2005, wrote.
He did not admit to it until evidence was compiled and hit the front page, and let it appear for part of the week to participants and onlookers as if there was massive internal strife.
It is really really wild for other comments to try to pretend the comment above is about something other than the admin (spez) using his powers to edit reddit comments of people he didn't like or political opponents.
This isn't that surprising, you used to be able to add anyone on the entire site as a moderator and it'd autoaccept. It's doubtful he actually moderated it in any capacity. He's still moderator of some random subreddits.
What's funny about this to me is that the actual moderators of r/jailbait thought "I know how I can insult u/spez, I'll make him a moderator of my sub, so he'll look like a scumbag, like I am"
I'm not sure if this is true, but if I were a creator and admin of a site, I'd assume I'm automatically a mod of every subreddit or subforum. It doesn't necessarily mean spez was specifically moderating that sub.
Part of spez's job is to be the lightning rod for controversial decisions. The board I'm sure is pushing for the same things (increases in pricing, driving users to the official app) in order to boost metrics before the IPO. Aside from the somewhat pointless AMA where his frustrations came out a bit too much, if you assume that the effective removal of API access had to happen, what do you think he's done wrong during this?
Many take exception to his handling of Reddit’s relationship with Apollo and Christian Selig specifically.
Steve Huffman has reportedly told employees that Selig threatened Reddit. Selig posted a (perfectly legally recorded and disclosed) call recording showing the alleged “threat” was a misunderstanding over which the Reddit employee on the call apologized immediately.
Huffman serves in an official capacity at the Anti-Defamation League. People are (rightly, I think) critical of his handling partly in light of that.
Maybe he knows where all the skeletons are buried, so removing him would be too difficult for the board (or they're not ready to pay his golden parachute/buy his silence).
Back in the day you could create a subreddit and invite anyone to mod it. The invite would be automatically approved. I suspect this is what happened here.
He probably alludes to the common conservative trope that Jews/blacks/environmentalists/gays/trans people are 'pedophiles' (depending on which era you look at).
Pedophilia is coming en vogue on the left. If you can't see it now, you'll see it soon. Spez will likely claim that he really was a /r/jailbait moderator for leftist credibility, I would guess 3-5 years from now.