Trust-based moderation systems (cblgh.org)
106 points by KoftaBob on Dec 10, 2023 | 79 comments



As a former moderator of some relatively large channels | forums semi-frequently subjected to both edge lord raids and the glacial, perverse machinations of the slow troll, I have one question:

    To find the most trusted peers, we use Appleseed, a peer-reviewed algorithm and trust metric which was proposed in the mid 2000s by Cai-Nicolas Ziegler and Georg Lausen from the University of Freiburg. Appleseed operates on precisely the kind of weighted graph structure we have described, and it is also guaranteed to converge after a variable (but finite) number of iterations. Appleseed, once converged, produces a ranking of the most trusted peers.

    Appleseed produces its rankings by effectively releasing a predefined amount of energy at the trust source, or starting node, and letting that energy flow through the trust relations. The energy pools up in the nodes of the graph, with more energy remaining with the more trusted peers. After the computation has converged, each peer has captured a portion of the initial energy, where the peer with the most energy is regarded as the most trusted.
Now that this mechanism is out in the open, how robust is it in the face of a determined attempt to deliberately game the system?

i.e.: Can I become the most trusted for that one glorious moment of ripping the tablecloth out from under everybody?
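For anyone who wants to poke at it, here's a toy version of the energy-spreading idea from the quoted passage - a rough Python sketch, not the actual Appleseed implementation (which also adds backward edges to the source, normalization, and a proper convergence check). The graph, decay factor and starting energy are made up:

    # Toy illustration of "release energy at the source and let it flow
    # along weighted trust edges" - NOT the real Appleseed algorithm.
    def spread_trust(graph, source, energy=200.0, decay=0.85, iterations=50):
        """graph: {peer: {trusted_peer: weight, ...}} with weights in (0, 1]."""
        captured = {peer: 0.0 for peer in graph}   # energy each peer keeps
        incoming = {peer: 0.0 for peer in graph}
        incoming[source] = energy
        for _ in range(iterations):
            outgoing = {peer: 0.0 for peer in graph}
            for peer, pending in incoming.items():
                if pending == 0.0:
                    continue
                captured[peer] += (1 - decay) * pending     # peer keeps a share
                edges = graph[peer]
                total = sum(edges.values())
                for neighbour, weight in edges.items():     # the rest flows onward
                    outgoing[neighbour] += decay * pending * weight / total
            incoming = outgoing
        # the ranking: peers where the most energy pooled up
        return sorted(captured.items(), key=lambda kv: -kv[1])

    graph = {
        "alice": {"bob": 0.9, "carol": 0.4},
        "bob":   {"carol": 0.8, "dave": 0.2},
        "carol": {"dave": 0.6},
        "dave":  {},
    }
    print(spread_trust(graph, "alice"))

Gaming it then comes down to engineering edges that route some of that flow to yourself, which is what the replies below get into.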


This "peer reviewed algorithm" is basically PageRank, or close enough for the following critique to apply: None of these algorithms are models that infer trust from raw content, aka there is no special sauce for trust here, it's just an information theory problem in the abstract. So Garbage In, Garbage Out. These algorithms are only good when you have a single, consistent signal of trust that you can trust. So it will have the same issues that Google has with SEO spam, and there's no graph-traversing algorithm to solve that, as it is an out-of-band issue compared to what these algorithms are designed to take as input. Take outgoing links to be your signal of trust, and you'll create a linkspam problem. So you need an anti-spam model that sits in front of this algorithm to detect violations of webmaster guidelines, aka gaming the system.

So in other words, all you need to do to game the system is generate a bunch of accounts that publicly link to you, follow you, or whatever this system's input of 'User A trusts User B' is. And not get caught by whatever anti-spam or anti-bot model they're using.
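To make that concrete (purely illustrative, on the same kind of weighted trust graph; the names are invented):

    # Illustrative sybil attack on a graph-based trust metric: mint cheap
    # identities that all "trust" the target. Whether this actually moves
    # the target's rank depends on whether any sybil is reachable from a
    # peer the ranking source already trusts - hence the need for the
    # anti-spam layer described above.
    def add_sybil_ring(graph, target, count=100):
        graph.setdefault(target, {})
        for i in range(count):
            sybil = f"sock_puppet_{i}"
            graph[sybil] = {target: 1.0}    # every fake account fully trusts the target
            graph[target][sybil] = 1.0      # and the target vouches back, forming a ring
        return graph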


Which is why we need Proof-of-Identity-based internet infrastructure ASAP to verify that the people behind the avatars are real.


Won't stop the problem. There is an inexhaustible supply of real people who are willing to help game any system for ideology or entertainment.


It's trivial to store a ban list of several billion User IDs in memory. That's only a few tens of GB.
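Back-of-the-envelope (assuming 64-bit IDs; adjust for your ID size):

    ids = 4_000_000_000                # "several billion" banned user IDs
    bytes_per_id = 8                   # one 64-bit integer each
    print(ids * bytes_per_id / 10**9)  # 32.0 GB -> "a few tens of GB"
    # A hash set adds overhead on top of this; a sorted array or a Bloom
    # filter (if false positives are acceptable) keeps it at or below it.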

As long as it can be guaranteed that one specific individual can only ever create one account, it's not a problem at all.


In many countries there is already a market for buying and selling real world identities of penniless students and homeless alcoholics to use for nefarious ends, e.g. financial fraud or registering shell companies. Given enough financial incentive, Proof of Identity is insufficient.


Proof of identity with a social network backing of people vouching for each other then. As long as IDs are limited, we can track and assign trust in a secondary market. Anyone pulling stunts like you mention is just selling out their credit for a one-time small gain. Of course that will still happen, but it can be disincentivized financially - and has been in real-life societies that are less desperate or have more attached to their social credit.


> Proof of identity with a social network backing of people vouching for each other then

Which would seriously reduce the number of people willing or able to use the web.


You can wrap those identities behind anonymous usernames. The point is just to be able to link 1 identity per person.


Let’s raise the stakes even further than discussion forums. Search for “eve online betrayal” or “eve online heist” and determine if this algorithm could’ve prevented those situations.

Then raise the stakes more and consider the US presidential candidate nominations process.

Push it to extreme situations, and finally in the end compare it to the Chinese social scoring system.

This kind of journey of morals is what ultimately led to me transferring ownership of my own online community and walking away for good. It’s tough and I don’t envy the folks who keep up with these things.


> Then raise the stakes more and consider the US presidential candidate nominations process.

If there was any trust left here, the upcoming election cycle should have already destroyed it. The likely Republican candidate refuses to debate, the Democrats all seem to think the president shouldn't run again but won't stop him or allow a challenger, in all likelihood the two final candidates also won't debate each other, and we're left picking between two candidates that much of the country likely isn't happy with. That's neither democratic nor trustworthy.


Public consensus on an Israel-Palestine ceasefire vs. the stance of their "democratically" elected politicians, supposedly there to enact "the will of the people", also reveals the theatrical aspect, even without adjusting for the massive home-field propaganda advantage.

How much longer they can keep this illusion running has got to be THE most interesting thing happening at the moment if you ask me.


To be fair, the UN itself is revealing its own theatrical aspects. What would the ceasefire have done at the end of the day? Would the UN kick Israel out if they continue the war despite a UNSC vote? Would the UN somehow force Israel to stop fighting? Would countries go to war with Israel specifically to uphold the UNSC vote?

When push comes to shove the UN has very little power here and is at risk of going the way of the League of Nations if it pushes too hard without having any meaningful enforcement mechanisms.


> Would the UN kick Israel out if they continue the war despite a UNSC vote?

Unlikely. Is there even a process for the UN to kick members out? What good would it do anyway? At most, they get demoted to a non-voting member, but the US and UK probably won't both vote against them in the Security Council, so do they really need a General Assembly vote?

> Would countries go to war with Israel specifically to uphold the UNSC vote?

No, but they might enact a no-fly zone, or do a peace keeping mission with lots of possible rules of engagement.

But the UN is really a means for the powerful countries to justify their extraterritorial actions, when there's enough consensus and the other powers don't care enough to turn away the rubber stamp. It does some other important stuff, but if the UN had significant power in the Israel/Palestine conflict, there's a series of adopted resolutions that could have been de facto enforced.


> No, but they might enact a no-fly zone, or do a peace keeping mission with lots of possible rules of engagement

I'm not sure how that would work here. Given that they are primarily Israeli jets in the air, UN member states would need to be willing to shoot down Israeli jets to enforce the no-fly zone.

> if the UN had significant power in the Israel/Palestine conflict, there's a series of adopted resolutions that could have been de facto enforced

Now you got me curious. What are those existing resolutions they could reach for? Is it mainly just enforcement related to economic sanctions and similar non-military actions?


> I'm not sure how that would work here. Given that they are primarily Israeli jets in the air, UN member states would need to be willing to shoot down Israeli jets to enforce the no-fly zone.

Sure, but enforcing a no-fly zone isn't going to war. And I'm sure they'd shoot down Hamas jets as well, so it's fair.

> What are those existing resolutions they could reach for?

There's a big list of resolutions [1]; resolution 54 [2], from the Security Council in 1948, ordering everyone to desist from further military action, is probably technically still in force and applicable.

In the General Assembly, you've got resolution 181 [3] from 1947, which set a partition plan for Mandatory Palestine that didn't happen. And there are many years of annual calls for Israel to withdraw from the occupied territories, etc., between about 1967 and about 1984. Unfortunately, many of the Wikipedia links to General Assembly resolutions don't currently work, so I'm relying on titles.

Anyway, if the UN orders a partition in 1947, and it hasn't happened by 2023, but they still say oh hey, do this thing... either the UN doesn't have much power or it isn't willing to use it.

[1] https://en.m.wikipedia.org/wiki/List_of_United_Nations_resol...

[2] https://en.m.wikipedia.org/wiki/United_Nations_Security_Coun...

[3] https://en.m.wikipedia.org/wiki/United_Nations_Partition_Pla...


> What would the ceasefire have done at the end of the day?

It would have further demonstrated that the UN is also, to a non-trivial degree, a psychological operation.

> Would the UN kick Israel out if they continue the war despite a UNSC vote?

The US is the military muscle of the UN, and as far as I can tell, Israel largely controls the US, at least with respect to Israel. Who Epstein worked for, and who he has on tape, seems like a relevant aspect of this issue that hasn't gotten much traction even on TikTok.

> When push comes to shove the UN has very little power here...

The UN, or at least the idea of the UN that has been distributed among the minds of people in Western countries, has massive power. Control people's beliefs and you can control the world, and the US and Israel are both masters at that game. For evidence, I present the conversations that take place on social media, including HN.

> ...and is at risk of going the way of the League of Nations if it pushes too hard without having any meaningful enforcement mechanisms.

Dare to dream! I can't think of any "reasonably" likely scenario how that could come about, but maybe I lack imagination.

I do think the decades-long psy-op that Israel and the US have had going in this specific region is at serious risk, though. However, I do not underestimate the "Public Relations / Journalism / etc." skills of who we're dealing with - I think one well-designed "event" could easily put most people right back into their trance.


> Dare to dream! I can't think of any "reasonably" likely scenario how that could come about, but maybe I lack imagination.

It's definitely unlikely in the near future, though it will die eventually, and when it does it will likely happen quickly. I don't see many ways that happens soon, though depending on how seriously UN member states actually take the war in Gaza, they could walk away after seeing either how useless the UN is when one state can veto, or how a UN without teeth is only useful during times of peace.


I think one overlooked aspect of the UN is that it's a way to gather a lot of diplomacy in one place. A constant mingling opportunity, if you will. As ugly as it is, it still seems valuable to me. Therefore I don't think it will be easily abolished.


It's one of those things with history that I always wonder about. Was a similar sentiment common shortly before the League of Nations fell apart?

I also don't expect the UN to dissolve anytime soon, but I can't really put any reasoning behind that other than it feeling crazy to think the UN could be gone tomorrow or 6 months from now.

Presumably, if enough nations take real issue with Israel's war they could get frustrated enough with political roadblocks and general lack of accountability and enforcement that they decide to walk away. If a few walk it may not take much for it to quietly slip away.


Both parties are still running primaries. None of the races have happened yet, so whether Biden or Trump gets the nomination isn't set in stone. In addition to the two major parties, there are several independent candidates and third parties. That they are not popular enough to win isn't a condemnation of democracy. That's how it works.


That's not how it works in a first-past-the-post voting system!

Third parties cannot gain traction. (Hence all the effort spent on co-opting and riding one of the existing parties like a zombie.)


This.

Third parties effectively exist as free research into what positions will be popular next cycle.


Well, at one point the Republicans were a third party. Yes, first-past-the-post is terrible, but to say your candidate can't win without getting the most votes is to describe how nearly all election systems work.

A lot needs to be reformed with regard to voting and election systems in the US, but to claim it is impossible for a third party to win is factually incorrect. How else would independent politicians like Bernie Sanders or third-party candidates like Abraham Lincoln ever have been elected?


Agreed that third parties are possible, though it is still worth noting that a third party win would almost certainly result in an existing party being replaced for a different two party system.

The parties in the US have changed around something like 5 or 6 times. In the last 2 or 3 changes the party names didn't change, but the parties did fundamentally change. After each shakeup, we've ended up with a two-party system in which it's structurally unworkable for independents to win at the highest level.


That's not how nearly all election systems work.

Who wins is often not determined on election day, but in coalition making after the election. If you have 5 - 10 parties, it's often not immediately apparent on election night who can gather a majority behind them.


The Republicans are running a primary, though the candidate with by far the most support isn't taking part. Trump will be on the primary ballot and always loves hearing himself talk on an empty stage, but he's avoided debates entirely. That isn't the fault of voters or the party necessarily, though the party can and possibly should require candidates to take part in the primary process if they want to be on the party's ballot.

The Democrats effectively aren't running a primary at all. There are technically two challengers[1], though there are no debates planned [2]. RFK was running on the ticket and had plenty of support in polls to warrant primary debates, though he's now running as an independent after what he viewed as a refusal by the party to allow a primary process to challenge Biden (his opinion, I can't vouch for inner workings of the DNC).

[1] https://www.reuters.com/world/us/democratic-candidates-runni...

[2] https://deadline.com/feature/2024-presidential-debates-sched...


> RFK was running on the ticket

As a non-American, who's RFK? Not the long-dead Robert F Kennedy, of course, but my ability to search that acronym pretty much only brings him up


Oh sorry, yeah I shouldn't have gone with initials there! Yes that was referencing Robert F Kennedy


Oh! I must've really misunderstood your comment, my apologies, I thought you were talking about someone today!


Well, I'm failing at writing clear messages today. I was referencing a candidate today, Robert F Kennedy Jr.

Honestly, I never really think to refer to him as a Junior - his dad was assassinated shortly before I was born - but I should have not only avoided using initials but also included the "Jr".


You can also look at the /r/Canada takeover, where a small group of people pretended for YEARS to be something they were not until they were able to get themselves seated, and subsequently staged a coup on the sub.


Do you have any links to writeups about this? I haven’t heard of the /r/Canada coup


Why would you want an algorithm to prevent those situations? Aren't betrayals and heists what make EVE fun over time?


That's the right question, and what my work a quarter century ago on "attack-resistant trust metrics" attempted to answer. That's written up in a bunch of places, but I think probably the best starting point is my "lessons from Advogato"[1] talk in 2007.

[1]: https://www.youtube.com/watch?v=d9gjy1OsTGQ


I'm not sure that any measure of trust could prevent someone from building up trust and then ripping the tablecloth. You can only measure what you can observe, and short of something weird like a psychological profile, I don't think this could be in the scope of a trust metric.

The only way I can think of to try to avoid this is to simply make it more lucrative to continue with good behavior rather than liquidate the trust you’ve built up.


Yes.

Also, respond very quickly to recent trust issues, but slowly to long-term trust issues. So any trust violation gets punished quickly & heavily, but has little lasting effect if it turns out to be an isolated issue.

It's an interesting problem. I don't see how we can keep scaling and increasing the value of social connections (high trust in useful networks has almost unlimited potential value) without automating & decentralizing the quality maintenance.


I don't think it works as long as people are fully anonymous. There's gonna be a way to amplify yourself with fake accounts. The way a social network should work is, you only directly communicate with people you know IRL*, but it's easy to spread info across multiple hops. Each person along the way is self-moderating because they won't be scamming or harassing their real acquaintances. That's why OG Facebook without groups, pages, global recommendations, etc was actually very civil.

Of course, that's not doable in a typical chat room setting.

* Profiles don't have to be tied to real identities like on Facebook. Screen-names are fine as long as people know who's who, like on AIM.


> Each person along the way is self-moderating because they won't be scamming or harassing their real acquaintances.

LMAO. Bernie Madoff got started by roping in friends and family. Friends and family are literally the biggest targets in every MLM scheme. Family is frequently implicated in identity theft (they have access to your files). Gen-Z has figured out that they can scam their families and cry gay persecution from them to avoid accountability. The vast majority of rape cases (70-90%) are acquaintance rape.

Real acquaintances are the most likely to be victimized in literally every form of crime, including murder. Trusted networks only function when people have decency and don't exploit their insider access.

> That's why OG Facebook without groups, pages, global recommendations, etc was actually very civil.

I'd argue it's because the networks were smaller, and no other reason. When you have a large family, you can afford to fuck over a few members and live with never being invited to Thanksgiving at their house again.


The stats would be different if people trusted strangers. Some people are awful to their acquaintances too, but they'll be that way with or without a social network. Can't really do anything about people abusing hard-earned trust.

A large network isn't an issue in and of itself. Most people indirectly know tons of people. Overly connected is the issue, cause it incentivizes bad things for popularity. Nobody directly knows like 1M people IRL. It's even worse if there are no real-life consequences. Maybe we're saying the same thing.


> The stats would be different if people trusted strangers

That's an interesting angle. Trust is the fine line between friend and stranger though. The stats would be different because "acquaintance" no longer has meaning.

> Overly connected is the issue, cause it incentivizes bad things for popularity.

Yeah, I thought about it some more and shit really went downhill once FB replaced the timeline with the algorithm. That surfaced the worst behavior for attention. You're right about extended networks.


https://ncase.me/crowds/ is what got me thinking about this originally, supposedly based on prior research. It's a bit long and dramatic, so to summarize... They mathematically modeled complex ideas as spreading only if they pass a threshold: the percentage of people you know who already hold them. Too many connections create a noise problem for complex ideas but allow simple ideas to thrive. Simple ideas aren't always bad, but they tend to be - like rage. Too few connections is also an issue, so there's a sweet spot.

Obviously not proof of anything 'cause it's only a model, but I agree with the theory. Yes, FB did go down the toilet with the algo feed, maybe Twitter too. So this is how I judge social networks now.
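If it helps, here's a toy version of that threshold model - my own sketch, not the simulation from the page; the graphs and the threshold are invented:

    # Complex contagion: an idea spreads to a node only once at least
    # `threshold` of its neighbours already hold it.
    def spread(graph, seeds, threshold=0.3, rounds=10):
        holders = set(seeds)
        for _ in range(rounds):
            newly = set()
            for node, neighbours in graph.items():
                if node in holders or not neighbours:
                    continue
                exposed = sum(1 for n in neighbours if n in holders)
                if exposed / len(neighbours) >= threshold:
                    newly.add(node)
            if not newly:
                break
            holders |= newly
        return holders

    # Sparse chain: one convinced friend out of two is enough, so it spreads.
    chain = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    print(spread(chain, seeds={"a"}))          # {'a', 'b', 'c'}

    # Dense hub: one convinced friend out of ten is below the threshold,
    # so the same idea stalls - the "too many connections" noise problem.
    hub = {"h": [f"n{i}" for i in range(10)], **{f"n{i}": ["h"] for i in range(10)}}
    print(spread(hub, seeds={"n0"}))           # just {'n0'}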


I like it, how could one make it happen?


I made my own social+news app following this model, but it's nearly impossible to market any kind of social media. Field is too played out. I gave up.

Old Facebook already did it. Twitter could be it if there are no recommended / globally popular accounts and you just repost stuff you like. Both found it more profitable to shove stuff at people and create popularity contests.


I've never moderated a forum - what are 'edge lord raids' and 'slow trolls'? I'm sure I've come across them on reddit, for example, but I'm not familiar with the terms, and googling only gets D&D and fishing references.


"Edge lords" are those that for whatever reason like to make uncomfortable contraversial remarks .. not for the genuine sake of discussing things that should be discussed in a measured way but for the goal of provoking drama, over reaction, chaos, confusion and distraction.

The appearance of multiple relatively new users all pushing extreme takes that are on the edge or well over it gets shorthanded as an edge lord raid - it's coordinated and can have multiple root causes, from simple social drama to actual distraction with an ulterior motive (e.g.: prompting moderators into rushed counter-reaction can expose passwords | permissions | inter-operator side channel communications, etc).

Slow trolls like the drama but dislike the boot .. they'll wheedle their way into communities and build a support network, then pivot into being "that uncle at Thanksgiving" .. you know the one.

These are very rough descriptions; it's very much the case that you might not be able to pin particular behaviours down, but you'll know 'em when you see 'em.


Yeah systems like this essentially just centralize power. For a failure case, see spam IP address blacklists. I think generally it's not a good model.


The usual answer: The “Why your anti-spam idea won’t work” checklist. [1]

If you can create identities at zero or modest cost, no majority-vote scheme will work.

Amusingly, what does work is Second Life. Space keeps everything from being in the same place. You can shout at most 100 meters, and the 3D world is the size of Los Angeles. There's no broadcast system built in. Jerks are a local problem. Local landowners can kick people off their land. Spam consists of buying small land parcels and putting up billboards, and is rarely profitable. Influencers have small circles of influence. Everything is local.

If it's hard for one person to reach large numbers of people at low cost, moderation becomes far less of a problem. This is alien to the concept of social networks of course. It does raise the question, do you need to give everybody a bullhorn?

[1] https://trog.qgl.org/20081217/the-why-your-anti-spam-idea-wo...


> Local landowners can kick people off their land.

Sounds like subreddit moderators to me? Still builds on human labour, this time unpaid and from community members.


This sounds a lot like the original pitch for Urbit.


Transitivity of trust is controversial, so I am sceptical of systems that aim to unburden most participants from having to manage individual trust relations and that end up producing a set of leaders/deciders and a larger passive group of followers. Not sure how Appleseed or the proposed TrustNet overlay solves this.

Also I don't think there's much hope for one-size-fits-all solutions to trust tracking. Some applications are slow and iterative, like the evolution of reputation in communities, and they must allow for redemption. Others are critical, where even a single, brief defection would be a disaster. I guess this one is aimed at social media chat.


I always thought Slashdot's community moderation and meta-moderation was excellent. I always thought it curious that nobody copied it.

Of course, dang-based moderation also works well, but you need a dang for that.


Let’s hope human cloning will arrive within dang’s lifetime.


A sort of Dang Acevedo? :)


Meh, fine-tune an LLM on dang's comments and call it a day. Ship early and ship often; adjustments can be made as we discover it doing the wrong thing.


Funny, but taking it seriously: it should be trained on dang's moderation actions, not comments.


Most of dang's comments (that I've come across, at least) are moderation actions, like telling people how they're not following the guidelines and so on. But yeah, the actual backend moderation actions should obviously also be included in the training set.


Excellent at producing a specific outcome (optimizing for certain variables) - whether that outcome is anywhere near optimal in a comprehensive sense is another matter entirely.


Excellent at allowing the same 10 guys (and their alts) to dominate almost every discussion. For truly serious topics, this place isn't all that different. Manipulation is a powerful weapon.


I've never seen this to be the case. Could you elaborate on how dang does this?


Metafilter also seems to work fine. Just charge money for each account creation; eventually the repeat trolls will get tired of paying over and over again.


As far as I understand, every user must know the trust network of all other users, which 1) doesn't scale well and 2) has terrible privacy implications.


It needs a republic structure. Every person joins a small default group of random strangers, their moderation group. Small enough that bad actors can be fished out. This is part of a larger group, where these individual silos get their own trust relationship. If a small group starts misbehaving, the individuals in that group get reassigned. If the individuals who have moved also correlate with a lack of trust in their next group, they get flagged and put on probation. We can't have global trust until local trust has been established.
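A rough sketch of the bookkeeping that implies (names and thresholds made up, just to make the idea concrete):

    import random
    from collections import defaultdict

    STRIKE_LIMIT = 3                     # made-up threshold
    strikes = defaultdict(int)           # times a user sat in a failing group

    def handle_failing_group(group, healthy_groups, probation):
        """When a group loses the trust of its parent group, reshuffle its
        members; users who keep landing in failing groups get flagged."""
        for user in group:
            strikes[user] += 1
            if strikes[user] >= STRIKE_LIMIT:
                probation.add(user)                         # correlated with distrust
            else:
                random.choice(healthy_groups).append(user)  # reassign at random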


Problem (1) can likely be addressed by restricting the number of iterations to some small constant, like 3 to 6 (which they suggest in the article). You can also restrict the number of peers-of-peers you fetch during each iteration to a small random subset. So if we only iterate 6 times and choose 10 peers per iteration, we'll get 1 million (10^6) trust scores needing to be pulled to calculate a trust score. That is an upper bound and will likely be less, because it assumes each peer is distinct. At 32-bit floating point, that's 32 million bits, or 4 MB, to fetch.

I can imagine this being reduced by at least a factor of 10 without much impact on the trust score. But note the “random subset” means different people will have different trust scores for the same peer :shrug:
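Something like this, maybe (hypothetical function names, not the thesis code) - the number of trust assignments fetched stays on the order of fanout^depth, and the random sampling is exactly why two viewers can end up with different scores for the same peer:

    import random

    def collect_trust_edges(fetch_trusted_peers, root, depth=6, fanout=10):
        """fetch_trusted_peers(peer) -> {other_peer: weight}, e.g. pulled
        from the network. Walks at most `depth` hops out from `root` and
        samples at most `fanout` assignments per peer per hop."""
        edges = {}
        frontier = [root]
        for _ in range(depth):
            next_frontier = []
            for peer in frontier:
                assignments = fetch_trusted_peers(peer)
                picked = random.sample(sorted(assignments), min(fanout, len(assignments)))
                edges.setdefault(peer, {}).update({p: assignments[p] for p in picked})
                next_frontier.extend(picked)
            frontier = next_frontier
        return edges   # this partial graph is what the ranking step then runs on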

For (2), yeah, we probably need to lower our expectations on privacy for the time being; it’s a master’s thesis, and privacy in open distributed systems is very tricky.


> For (2), yeah, we probably need to lower our expectations on privacy for the time being; it’s a master’s thesis, and privacy in open distributed systems is very tricky.

If we do care about privacy, any moderation system should first be designed to meet that goal. It's not worth designing a moderation system if we don't first know it will work with one of the core requirements.


Slightly off topic:

I am trying to innovate on moderation systems, and I run/code a whitelist-moderated forum [0]. You can only see posts and comments from users that you follow. It's a very simple system and there really aren't any gaming vectors. One implication is that if a new user signs up and posts, no one will see it unless they already follow that user. I've actually never used any typical censorship moderation.

[0] https://www.commentcastles.org
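The core rule is just a filter over the follow list - roughly this (a simplified sketch, not the actual commentcastles code):

    def visible_posts(all_posts, follows, viewer):
        """Only posts whose author the viewer explicitly follows are shown."""
        followed = follows.get(viewer, set())
        return [post for post in all_posts if post["author"] in followed]

    follows = {"me": {"alice", "bob"}}
    posts = [{"author": "alice", "text": "hi"},
             {"author": "spammer", "text": "buy now"}]
    print(visible_posts(posts, follows, "me"))   # the spammer never appears for "me"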


> You can only see posts and comments from users that you follow.

I don't get it. How do you even find users to follow if you can't see their posts or comments?


I just signed up. The front page seems to show lots of posts though I haven't followed anyone yet. Do most users avoid the front page?

(I think your comment is on topic for a post about a moderation system.)


“How do you remove malicious participants from a chat?”

You can't. Inevitably, the forum is slowly taken over by some self-appointed dictator and their cohorts, and the saner voices are driven out.


Appleseed sounds a lot like PageRank. Is it? The link for it returns 404. It looks like this[1] is the original paper. It does cite the PageRank paper...

[1] https://link.springer.com/article/10.1007/s10796-005-4807-3


A central problem in all online communities that this post doesn't address is the definition of malicious behavior. E.g. a forum run by the marketing division of Coca-Cola might define comments on the negative health effects of soda consumption or just how great Pepsi is as malicious behavior.

Explicit definitions of malicious behavior in the forum guidelines may or may not be enforced if the forum is controlled by interests seeking to covertly amplify certain narratives while suppressing others, even if those narratives do not explicitly conflict with site guidelines.

One plausible approach to this situation is to use an LLM agent as the forum moderator - one which only uses a publicly available, explicit set of moderation rules to flag comments and submissions. Something like this is almost certainly being used at YouTube, X, etc., with the caveat that the rules being used are mostly hidden from the public (e.g. X feeds don't seem to have much interest in amplifying stories about UAW's efforts to unionize Tesla, etc.).
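Roughly like this (a sketch only - call_llm stands in for whatever model API you use, and the rules text is a made-up example):

    MODERATION_RULES = """\
    1. No personal attacks.
    2. No undisclosed commercial promotion.
    3. No doxxing or posting of private information.
    """

    def moderate(comment_text, call_llm):
        # call_llm is a stand-in for whatever model API the forum uses.
        prompt = (
            "You are a forum moderator. Flag the comment ONLY if it violates one of "
            "the published rules below, and cite the rule number.\n\n"
            "Rules:\n" + MODERATION_RULES + "\n"
            "Comment:\n" + comment_text + "\n\n"
            'Reply as JSON: {"flag": true or false, "rule": number or null, "reason": "..."}'
        )
        # both the rules and each verdict can be published alongside the thread
        return call_llm(prompt)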

This could lead to a regulatory approach to social media in which the moderation rules being fed to the LLM must be made publicly available.


Tack, Alexander! I'll add your thesis to my library of papers & books on online trust.


Is that list available to view somewhere?


Why would you trust a list of papers and books on online trust from some rando online?


I'm interested in the topic, and it's just a collection of some interesting info that this chap has found. Trusting the information has nothing to do with it - I can discern whether the info is valuable for myself.


Is there any way to express distrust? This seems like level 0 of moderation, a way to "report" bad behavior.

It seems here you can only "trust" someone into being a moderator, and then they have to do this part.


It's mentioned in the linked post, but I suspect in practice distrust is less useful than you'd think as long as fresh identities are free. You can't punish someone without any "skin in the game". Centralized systems have an advantage here: they can refuse to issue new accounts or make it cost money to register, which puts some cost on spam. Distributed systems can be spammed and sock-puppeted for free.

In practice most central systems don't explicitly charge money for accounts, but instead require verification of something that would make it inconvenient to register large numbers of accounts all at once. For example, if you want a Gmail account, you need to verify your phone number with SMS. Phone numbers cost money to obtain, which means that you can distrust ones used to create spam accounts and the spammers actually lose something.

This is also why Fediverse moderation puts so much emphasis on defederating instances rather than banning individual accounts. In the Identica/OStatus era of the Fediverse, defederation was actually very controversial! But here in the Mastodon era, the only way to actually punish bad instance operators (and there are plenty of them) is to defederate their instance. This works because instances are referred to by domain name, and DNS is a centralized[0] system that costs money to register, so you can distrust a domain and actually cost the abuser money.

[0] The distribution of domain records is decentralized, and you can delegate subdomains forever, but you have to have a chain of delegation leading back to the root servers. Top-level delegations cost lots of money, second-level delegations less so, and subdomain delegations are basically not worth anything and can be distrusted with wildcards on the first private zone in the domain (e.g. ban .evil.co.uk, .evil.net, etc.).


I'm not sure that the general population would understand three different but similar trust actions, when the difference in effect to them personally is zero.


"Trickle down moderation"




