Matrix network planning a distributed approach to online moderation (element.io)
26 points by dexwell on March 14, 2022 | hide | past | favorite | 12 comments


Glad to see this becoming more of a thing.

I started talking about this kind of blocklist / bouncer list being made available to individuals in 2018 (https://news.ycombinator.com/item?id=17849513),

wrote more about users having access to controllable filters in 2019 (https://news.ycombinator.com/item?id=18816191),

and things in 2020 made it even more obvious that the need for such tools was coming (https://news.ycombinator.com/item?id=22521270).

Will be glad to see these evolve and become more fine-tuned as more people get involved.


>So how would the Russian disinformation campaign unfold in a fully operational decentralised reputation world? Users would subscribe to trusted fact checkers’ reputation lists, which would decrease the rating of the bots and the fake content, leaving the disinformation alone in an echo chamber without an audience.

Sounds like a nightmare. I've seen a LOT of moderation overreach with the primary Matrix homeserver to begin with, and homeservers blocked from federation for seemingly little reason. I do not trust its administrators to 'fact check', nor do I trust its users to approve of it. I think these measures, in the long term, will create an iron hugbox where every lunatic in the asylum exists in his own bubble.

The Matrix network itself, as far as I know, still primarily exists within the official homeserver, which has the most users by far. I feel what happens there may have far-reaching consequences for the network, and it would be far better to keep things on a server-by-server basis rather than making it easier to close off.


>The Matrix network itself, as far as I know, still primarily exists within the official homeserver, which has the most users by far.

It is true that there are many users on the matrix.org homeserver, but I recall a claim from Arathorn (the lead dev) that fewer than 30% of users actually use that homeserver, so I am skeptical that such a thing would have as heavy an impact as you feel it may. Regardless, as the developers of the Matrix protocol and its most popular implementations, they sure do have a lot of power to steer how users will interact with it all in the future.


You've missed the whole point of the article, which is to propose decentralising moderation so that communities can abide by whatever rules they want on their own terms. In other words, if you don't like the rules that the Matrix.org server happens to apply, you can go somewhere else and apply whatever rules you want.


I know the point of the article. I believe the current semi-centralized moderation system is vastly preferable to what they propose, because I feel making it easier to block swathes of users and homeservers would have a very negative effect on free discussion.

We've already seen what happens with this style of moderation through things like Twitter blacklists, and these changes would basically integrate such things into the protocol.

If blacklists are commonly accepted and used, but the people maintaining them are biased or incompetent, there stands a chance that your 'own server with whatever rules you want' will not be able to communicate with the vast majority of people. You can see this happening with email now, where large providers like Gmail broadly block smaller ones.


Totally agreed that email DNSBLs are a failed system; they just become a centralised blocking mechanism or protection racket.

The idea that we're experimenting with in Matrix is instead to let anyone publish a reputation list (greylist), and mix together existing ones, and use it to apply whatever rules they like at whatever granularity they like. In other words: at the account level, let users hide users or rooms they dislike. Or at a community level, let moderators block users based on whatever criteria. Or, worst case, at a server level (although we haven't seen much of that in practice, unlike email or ActivityPub).

For instance, an obvious capability would be for rep lists to also have rep themselves, so you can gauge whether a given rep source has become corrupted, and switch as needed.

In other words: this is primarily about empowering users rather than admins, giving users the tools to block stuff on their own terms and make their own choices, rather than being beholden to a DNSBL-style cabal or whatever rules their server admin happens to adhere to.
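To make the mixing idea concrete, here is a minimal sketch of how a client might blend several subscribed reputation lists (each weighted by how much the user trusts it) and apply a per-user threshold. Everything here — the list format, the scoring scale, the names — is invented for illustration; it is not the actual Matrix proposal or any real API.

```python
# Hypothetical sketch of mixing reputation (grey)lists at the account level.
# The structures and names below are made up for illustration only.
from dataclasses import dataclass, field

@dataclass
class RepList:
    """A published reputation list: entity id -> score in [-100, 100]."""
    name: str
    scores: dict[str, int] = field(default_factory=dict)

def mixed_score(entity: str, subscriptions: list[tuple[RepList, float]]) -> float:
    """Blend scores from several lists, each weighted by subscriber trust (0.0-1.0)."""
    total, weight_sum = 0.0, 0.0
    for rep_list, weight in subscriptions:
        if entity in rep_list.scores:
            total += weight * rep_list.scores[entity]
            weight_sum += weight
    return total / weight_sum if weight_sum else 0.0  # unknown entity -> neutral

def should_hide(entity: str, subscriptions: list[tuple[RepList, float]],
                threshold: float = -50.0) -> bool:
    """Account-level rule: hide anything scoring below the user's own threshold."""
    return mixed_score(entity, subscriptions) < threshold

# Example: a user trusts one spam list fully and a stricter list at half weight.
spam = RepList("spam-watch", {"@bot:example.org": -90})
strict = RepList("strict-mod", {"@bot:example.org": -100, "@edgy:example.org": -60})
subs = [(spam, 1.0), (strict, 0.5)]
```

The weighted blend is also one possible answer to the "rep lists having rep themselves" point above: a list whose own reputation drops can simply have its weight turned down or removed.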


I love this - and if we can make them easy to fork, easy to post discussions about - I can see them becoming very useful - and different groups of people will use and rep up different ones based on their own criteria.

So it may be useful to have different rep points for things like this - I can see some users voting up the lists that follow their religious tribe, for example, others repping up the blocklists that focus on their left/right tribes, and some who don't want the X part of list A but do want the Y part of lists A and B.

Ultimately I wonder how these could be posted for open discussion and made easy to fork, so the process can really get going.

If users can easily apply these (and remove / switch them) to their browsers and apps - this can be hugely useful.
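The "X part of list A but Y part of list B" idea could work if published lists tagged their entries by category, letting a fork keep only the categories it wants. A quick sketch — the tag names and list format here are made up, not any real schema:

```python
# Hypothetical sketch of forking/subsetting published blocklists by category tag.
def subset(entries: dict[str, set[str]], want_tags: set[str]) -> set[str]:
    """Keep only entries carrying at least one wanted tag."""
    return {e for e, tags in entries.items() if tags & want_tags}

list_a = {
    "@spam1:x.org": {"spam"},
    "@troll1:x.org": {"harassment"},
}
list_b = {
    "@spam2:y.org": {"spam"},
    "@offtopic:y.org": {"offtopic"},
}

# A user who wants only the spam portions of A and B, ignoring the rest:
my_blocklist = subset(list_a, {"spam"}) | subset(list_b, {"spam"})
# my_blocklist == {"@spam1:x.org", "@spam2:y.org"}
```

A forked list is then just a new published dict built from subsets like these, which keeps the "easy to fork, easy to discuss" workflow simple.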


Can't we just teach critical thinking and research skills in schools, so people learn to determine who has credibility in which fields, etc.?

This is an endless war against misinformation. What is wrong today may be right tomorrow. Things change, and people need to be smart enough to make their own determination.

People should also be smart enough to figure out whom and what to vote for without relying on someone on TV or elsewhere to tell them.


This is an incredibly limited view and a nirvana fallacy.

No project can radically change the global education system, so of course we can't 'just' do that. And even if your wildest fantasy somehow came true and we were all perfect robots operating only on rational sorting of peer-reviewed papers, it would still not solve anything.

No human has even 1/1000000 of a chance to be a perfect expert on all the things that might be discussed in the world.

Science and critical thinking are not universal. No scientific paper and no amount of education on the history of Crimea will perfectly inform you about what is 'correct'.

Even beyond all that, moderation targets far more things than misinformation. It's about a hundred practical issues: abuse, spam, harassment, spoilers, keeping things 'on topic', and many other things.


Is Matrix really used in such a way that it even needs to solve the captive-audience fan-in problem created by big tech? And if there's a channel of people consensually sharing and coordinating disinfo, then in the limit of free communications, there's not really much to be done. Or is the purpose of this article really to fend off and redirect those calling for censorship?

Meanwhile I'm just sitting here wishing I could change the color of my username. Sigh.


"change the color of my username. Sigh." - This has been a top-3 complaint from my users since we started using Matrix/Element.

I know it's hard because you gotta put code in there and then hope the various clients respect it - and maybe a secondary color choice for dark mode vs. non-dark mode - but still, I'd put $100 on Bountysource or something to have the needed code handed to the Matrix folks to look over.

Regarding the comment on the article: I would guess this is a way to fend off the censorship calls, similar to other posts, which I appreciate big time.


I think these moderation improvements will be a good thing - not for stopping misinformation or whatever, but for stopping spamming attacks.



