
Can't help but think back to W. Edwards Deming's distinction between after-the-fact efforts to "inspect" quality into the process -- as opposed to before-the-fact efforts to build quality into the process.

OP offers a first-rate review (strategy + tactics!) for the inspection approach.

But, the unspoken alternative is to rethink the on-ramp to content-creation privileges, so that only people with net-positive value to the community get in. That surely means a more detailed registration and vetting process. Plus perhaps some way of insisting on real names and validating them.

I can see why MVPs skip this step. And why venture firms still embrace some version of "move fast and break things," even if we keep learning the consequences after the IPO.

But sites (mostly government or non-profit) that want to serve a single community quite vigilantly, without maximizing for early growth, do offer another path.



Absolutely this. Don't build the ship and then run around plugging leaks --- plan out the ship well enough to prevent leaks in the first place.

This is hard, and rare, because it requires predicting how all sorts of different people are going to interact with the community. Traditionally, this hasn't been something that the people who start software companies are particularly interested in, or good at. And a laser focus on user growth only compounds the problem.


Maybe when the Internet was new. But whether you date the Internet's birth to the 1980s with the original cross-continent and cross-country links, or to the first dot-com boom and bust in 2001, or to the iPhone in 2007, we know how "the Internet" is going to interact with "the community". We knew this back in 2016, when Microsoft released its "AI" chatbot to Twitter, Twitter taught it to be a racist asshole in less than 24 hours†, and the Internet collectively said: duh. Of course that was going to happen.

Anyone who's started a new community recently knows they have to start with some sort of code of conduct. That's non-negotiable these days. Would it be better if platforms like Discord did more to address the issue? Absolutely.

You're totally right that it isn't easy -- but the Internet's a few decades old by now, and we know what's going to happen to your warm, cosy website that allows commenting. The instant the trolls find it, you either die an MVP or live long enough to build content moderation.

†: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...


If your blog required a “real ID” to post content rather than allowing anonymous comments, would we have the same problem? The premise of the GP (and one I share) is that the internet’s content-moderation problems are symptoms of default anonymity. Twitter is anonymous by default, so nobody’s reputation is at stake when they teach a neural net hooked up to the Twitter firehose to be a racist asshole.


Facebook comments can still be a tire fire. It's not the anonymity, it's the lack of consequences.


In my experience, Facebook comment threads (while still awful at times) are very different from, e.g., YouTube or Tumblr or Twitter comments. Sure, they still devolve and become a mess sometimes, but from what I remember there was noticeably less "hard" trolling (i.e. 4chan-style derogatory) on Facebook. People still light-troll Facebook, but not so much in the explicitly derogatory and assholish ways seen in communities where expendable identities exist. In any event, because people use real IDs on Facebook, we can and do impose consequences. Remember when everybody used their first and middle names so employers wouldn't see their underage party pics with substance use? And then, when everyone's parents and grandparents joined, people just stopped posting that stuff altogether and Facebook "grew up".


In this case, a mix of both is required. While you absolutely must plan ahead and implement as many safeguards as you can prior to launch, that's simply the beginning, and it is incredibly naive to think that all the leaks can be prevented. (Or, honestly, that really any aspect of a community can be perfectly master-planned in advance.) To operate anything like a UGC platform is to be eternally engaged in a battle against ever-evolving and increasingly clever methods someone will come up with to exploit, sabotage, or otherwise harm your platform.

This is totally fine -- you just need to acknowledge it and try not to drop the ball when things seem to be running smoothly. Employing every tactic at your disposal from the very beginning should be viewed as a prerequisite, one that starts you off in a strong position, able to evolve without first having to play catch-up.


That's strongly parallel to Gall's Law:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

https://en.wikipedia.org/wiki/John_Gall_(author)#Gall.27s_la...



