
There is a difference between free flow of information and propaganda. Much like how monopolies can destroy free markets, unchecked propaganda can bury information by swamping it with a data monoculture.

I think you could make a reasonable argument that the algorithms that distort social media feeds actually impede the free flow of information.



> Much like how monopolies can destroy free markets, unchecked propaganda can bury information by swamping it with a data monoculture.

The fundamental problem here is exactly that.

We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.
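To make that concrete, here's a minimal sketch (Python, with illustrative names; nothing here is a real protocol or library API) of what client-side sorting could look like once the per-account feeds are fetched, merging them chronologically with no server deciding the order:

```python
import heapq
from dataclasses import dataclass
from itertools import islice

@dataclass(frozen=True)
class Post:
    timestamp: float  # Unix time the post was published
    author: str
    text: str

def merged_timeline(feeds, limit=50):
    """Merge many per-account feeds into one newest-first timeline.

    Each feed is assumed to arrive already sorted newest-first (as an
    RSS-style feed would), so heapq.merge can interleave them lazily.
    The ranking rule lives entirely in the user's client.
    """
    merged = heapq.merge(*feeds, key=lambda p: p.timestamp, reverse=True)
    return list(islice(merged, limit))

# Two subscribed "feeds", each newest-first:
alice = [Post(300.0, "alice", "third post"), Post(100.0, "alice", "first post")]
bob = [Post(200.0, "bob", "second post")]

timeline = merged_timeline([alice, bob])
```

Swapping `timestamp` for any other scoring function the user chooses is the whole point: the sort order belongs to the user agent, not the platform.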

Which means there are no ads, because nobody really wants ads, and so their user agent doesn't show them any. And that's the source of the existing incentive for the monopolist in control of the feed to fill it with rage bait, which means that goes away.

The cost is that you either need a P2P system that actually works or people who want to post a normal amount of stuff to social media need to pay $5 for hosting (compare this to what people currently pay for phone service). But maybe that's worth it.


>We could have social media that no central entity controls, i.e. it works like the web and RSS instead of like Facebook. There are a billion feeds, every single account is a feed, but you subscribe to thousands of them at most. And then, most importantly, those feeds you subscribe to get sorted on the client.

The Fediverse[1] with ActivityPub[0]?

[0] https://activitypub.rocks/

[1] https://fediverse.party/


Something along those lines, but you need it to be architected in such a way that no organization can capture the network effect in order to set up a choke point. You need all moderation to be applied on the client, or you'll have large servers doing things like banning everyone from new/small independent servers by default so that people have to sign up with them instead. The protocol needs to make that impossible or the long-term consequences are predictable.
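As a hypothetical illustration (none of this is ActivityPub; the names are made up), client-applied moderation can be as simple as a pure filter over the raw relayed stream, driven by rules only the user controls:

```python
def visible_posts(posts, blocked_authors=frozenset(), muted_words=frozenset()):
    """Filter an unmoderated stream with rules the user controls.

    The server's only job is relaying posts; whether anything is hidden
    is decided here, on the client, per user.
    """
    def allowed(post):
        author, text = post
        if author in blocked_authors:
            return False
        lowered = text.lower()
        return not any(word in lowered for word in muted_words)
    return [p for p in posts if allowed(p)]

# Raw stream as relayed by dumb-pipe servers:
stream = [
    ("alice@small.example", "hello from a one-person server"),
    ("spammer@big.example", "RAGE BAIT headline"),
]

shown = visible_posts(stream,
                      blocked_authors={"spammer@big.example"},
                      muted_words={"rage bait"})
```

Because the filter runs per client, no server is positioned to ban a small server's users network-wide; defederation becomes an individual choice rather than a choke point.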


>but you need it to be architected in such a way that no organization can capture the network effect in order to set up a choke point.

How is that not the case now?

>You need all moderation to be applied on the client, or you'll have large servers doing things like banning everyone from new/small independent servers by default so that people have to sign up with them instead.

I suppose. There are ActivityPub "clients" which act as interfaces that allow the former, serving as agents for a single user interacting with other ActivityPub instances. I'd expect those can take us most of the way you say we should go.

I haven't seen the latter, as there's really no incentive to do so. Meta tried it by federating (one-way) with Threads, but that failed miserably, as the incentives are exactly the opposite in the Fediverse.

I suppose that incentives can change, although money is usually the driver for that and monetization isn't prioritized there.

>The protocol needs to make that impossible or the long-term consequences are predictable.

Impossible? Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

ActivityPub is easily 75% of where you say we should go. Much farther along that line than anything else. But since it's not 100% it should be abandoned/ignored?

I'm not so sure about your "long-term consequences" being predictable. Threads tried to do so and failed miserably. In fact, the distributed model made sure that it would, even though the largest instances did acquiesce.

ActivityPub is the best you're going to get right now, and the best current option for distributed social media.

Don't let the perfect be the enemy of the good.

Edit: I want to clarify that I'm not trying to dunk on anyone here. Rather, I'm not understanding (whether that's my own obtuseness or something else) the argument being made against ActivityPub in the comment to which I'm replying. Is there some overarching principle or actual data which supports the idea that all social media is doomed to create dystopian landscapes? Or am I missing something else here?


> How is that not the case now?

The protocol allows servers, rather than users, to ban other servers. Servers should be only the dumbest of pipes.

> Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

I'm saying that by the time something like this has billions of users the protocol is going to be a lot harder to change, so you should fix the problems without delay instead of waiting until after that happens and getting deja vu all over again.

> Threads tried to do so and failed miserably.

Threads tried to do that all at once.

The thing that should be in your threat model is Gmail and Chrome and old school Microsoft EEE. Somebody sets up a big service that initially doesn't try to screw everyone, so it becomes popular. Then once they've captured a majority of users, they start locking out smaller competitors.

The locking out of smaller competitors needs to be something that the protocol itself is designed to effectively resist.


>> How is that not the case now?

>The protocol allows servers, rather than users, to ban other servers. Servers should be only the dumbest of pipes.

A fair point. A good fix for this is to have individual clients that can federate/post/receive/moderate/store content. IIUC, there is at least one client/server hybrid that does this. It's problematic for those who don't have the computing power and/or network bandwidth to run such a platform. But it's certainly something to work towards.

>> Are you suggesting that since ActivityPub isn't perfect, it should be discarded?

>I'm saying that by the time something like this has billions of users the protocol is going to be a lot harder to change, so you should fix the problems without delay instead of waiting until after that happens and getting deja vu all over again.

I'm still not seeing the "problems" with server usage you're referencing. Federation obviates the need for users to be on the same server and there's little, if any, monetary value in trying to create mega servers. Discoverability is definitely an issue, but (as you correctly point out) should be addressed. It is, however, a hard problem if we want to maintain decentralization.

>The thing that should be in your threat model is Gmail and Chrome and old school Microsoft EEE. Somebody sets up a big service that initially doesn't try to screw everyone, so it becomes popular. Then once they've captured a majority of users, they start locking out smaller competitors.

Given the landscape of the Fediverse, that seems incredibly unlikely. Perhaps I'm just pie in the sky on this, but those moving to ActivityPub platforms do so to get away from such folks.

Adding to that the ability to manage one's own content on one's own hardware with one's own tools, it seems to be a really unlikely issue.

Then again, I could absolutely be wrong. I hope not. That said, I'm sure suggestions for changes to the ActivityPub protocol[0][1][2] along the lines you describe, as a hedge that makes its capture by a series of corporate hell holes, as you put it, "impossible," would be appreciated.

[0] https://github.com/w3c/activitypub

[1] https://activitypub.rocks/

[2] https://w3c.github.io/activitypub/

Edit: Clarified my thoughts WRT updates to the ActivityPub protocol.


There is no generally accepted definition of propaganda. One person's propaganda is another person's accurate information. I don't trust politicians or social media employees to make that distinction.


There are definitely videos that are propaganda.

Like those low-quality AI videos about Trump or Biden saying things that never happened. Anyone with critical thinking skills knows those are either propaganda or engagement farming.


Or they're just humorous videos meant to entertain and not be taken seriously. Or they are meant to poke fun at the politician, i.e. clearly politically motivated speech, literally propaganda, but they aren't meant to be taken as authentic recordings, and deception isn't the intent.

Sometimes it's clearly one and not the other, but it isn't always clear.


'I'm just a comedian guys' interviewing presidential candidates, spouting how we shouldn't be in Ukraine, then the second they get any pushback 'I'm just a comedian'. It's total bullshit. They are trying to influence, not get a laugh.


Downvoted... yet here is the Vice President claiming that the FCC Commissioner's "we can do this the hard way or the easy way" remark about censoring Jimmy Kimmel was "just telling a joke":

https://bsky.app/profile/atrupar.com/post/3lzm3z3byos2d

You 'it's just comedy' guys are so full of it. The FCC Head attacking free media in the United States isn't 'just telling jokes'.


What you think is propaganda is irrelevant. When you let people unnaturally amplify information by paying to have it forced into someone's feed, that distorts the free flow of information.

Employees choose what you see every day you use most social media.


Congrats! You are 99% of the way to understanding it. Now you just have to realize that "whoever is in charge" might or might not have your best interests at heart, government or private.

Anyone who has the power to deny you information absolutely has more power than those who can swamp out good information with bad. It's a subtle difference yes, but it's real.


Banning algorithms and paid amplification is not denying you information. You can still decide for yourself who to follow, or actively look for information, actively listen to people. The difference is that it becomes your choice.


Well, this is about bringing back creators banned for (in YouTube's eyes) unwarranted beliefs stemming from distrust of political or medical authorities, and promoting such distrust. They weren't banned because of paid amplification.

I don't quite understand how the Ressa quote at the beginning of this thread justifies banning dissent for being too extreme. The algorithms are surely on YouTube's and Facebook's (and Ressa's!) side here; I'm sure they tried to downrank distrust-promoting content as much as they dared and were able to, limited by e.g. local-language capabilities and their users' active attempts to avoid automatic suppression, something everyone does these days.


Just regulate the algorithm market. Let people see, decide, share, compare


What is the "algorithm market"? Where can I buy one algorithm?


There isn't one yet; it would be the role of government to create a market on these large platforms.


OK, but that's an argument against advertising, and maybe against dishonest manipulation of ranking systems.

It's not an argument for banning doctors from YouTube for having the wrong opinions on public health policy.


> distorting the free flow of information

There is no free flow of information. There never was. YouTube and FB and Google saying "oh, it's the algorithm" is complete BS. They have always manipulated it, boosting whomever they see fit.


And propaganda by definition isn’t false information. Propaganda can be factual as well.


So many people have just given up on the very idea of coherent reality? Of correspondence? Of grounding?

Why? No one actually lives like that when you watch their behavior in the real world.

It's not even post modernism, it's straight up nihilism masquerading as whatever is trendy to say online.

These people accuse everyone of bias while ignoring that their own position comes from a place of such extreme bias that it irrationally, presuppositionally rejects the possibility of true facts in their chosen, arbitrary cut-outs. It's special pleading as a lifestyle.

It's very easy to observe, model, or simulate node-based computer networks that allow for coherent and well-formed data with high correspondence, and very easy to see networks destroyed by noise and data drift.

We have observed this empirically in real networks; it's pragmatic, and it's why the internet and other complex systems run. People rely on real network systems and the observed facts of how they succeed or fail, then try to undercut those hard-won truths from a place of utter ignorance. While relying on them! It's absurd ideological parasitism: they deny the value of the things they demonstrably value just by posting! Just the silliest form of performative contradiction.

I don't get it. Facts are facts. A thing can be objectively true in what for us is a linear global frame. The log is the log.

Wikipedia and federated text content should never be banned: logs and timelines, data, etc. But memes and other primarily emotive media are case by case; I don't see their value. I don't see the value in allowing people to present unprovable or demonstrably false data with a dogmatically, confidently true narrative.

I mean, present whatever you want, but mark it as interpretation or low confidence, versus multiple verified sources with a paper trail.

Data quality, grounding, and correspondence can be measured. It takes time for validation to occur, though; it's far easier to ignore those traits and just generate infinite untruth and ungrounded data.

Why do people prop up infinite noise generation as if it was a virtue? As if noise and signal epistemically can't be distinguished ever? I always see these arguments online by people who don't live that way at all in any pragmatic sense. Whether it's flat earthers or any other group who rejects the possibility of grounded facts.

Interpretation is different, but so is the intentional destruction of a shared meaning space by turning every little word into a shibboleth.

People are intentionally destroying the ability to even negotiate connections to establish communication channels.

Infinite noise leads to runaway network failure and, in human systems, the inevitability of violence. I for one don't like to see people die because the system has destroyed message passing via attentional DDoS.


Fortunately your biased opinion about what information has value is utterly worthless and will have zero impact on public policy. Idealized mathematical models of computer networks have no relevance to politics or freedom of expression in the real world.


There isn't. Yet everybody knows what I mean by "propaganda against immigration" (it's just that some would discredit it and some would fight for it), and nobody claims that the Hungarian government's "information campaign" about migrants is not fascist propaganda (except the government, obviously, but not even their followers deny it). So yes, the edges are blurred, yet we can clearly identify some propaganda.

Also, accurate information (like "here are 10 videos of blacks killing whites") paired with distorted statistics (there is twice as much white-on-black murder) is still propaganda. But these are difficult to identify, since they clearly affect almost the whole population. Not many people have even tried to fight against it, especially because the propaganda's message is created by you. // The example is fiction, but the direction exists; just look at Kirk's Twitter, for example. I have no idea about the exact numbers off the top of my head.


Propaganda wouldn't be such a problem if content wasn't dictated by a handful of corporations, and us people weren't so unbelievably gullible.


Indeed, didn't YT ban a bunch of RT employees for undisclosed ties? I bet those will be coming back.


Oh, but can you make an argument that the government, pressuring megacorporations with information monopolies to ban things they deem misinformation, is a good thing and makes things better?

Because that's the argument you need to be making here.


You don't even need to make the argument. Go copy-paste some top HN comments on this issue from around the time the actions YouTube is now reversing happened.


I think those arguments sound especially bad today, actually. They got the suppression they wanted, but it did not give the outcome they wanted.


Not really. You can argue that the government should have the right to request content moderation from private platforms and that private platforms should have the right to decline those requests. There are countless good reasons for both sides of that.

In fact, this is the reality we have always had, even under Biden. This stuff went to court. They found no evidence of threats against the platforms, the platforms didn't claim they were threatened, and no platform said anything other than they maintained independent discretion for their decisions. Even Twitter's lawyers testified under oath that the government never coerced action from them.

Even in the actual letter from YouTube, they affirm again that they made their decisions independently: "While the Company continued to develop and enforce its policies independently, Biden Administration officials continued to press the company to remove non-violative user-generated content."

So where does "to press" land on the spectrum between requesting action and coercion? Well, one key variable would be the presence of some type of threat. Not a single platform has argued they were threatened either implicitly or explicitly. Courts haven't found evidence of threats. Many requests were declined and none produced any sort of retaliation.

Here's a threat the government might use to coerce a platform's behavior: a constant stream of subpoenas! Well, wouldn't you know it, that's exactly what produced the memo FTA.[1]

Why hasn't Jim Jordan just released the evidence of Google being coerced into these decisions? He has dozens if not hundreds of hours of filmed testimony from decision-makers at these companies he refuses to release. Presumably because, like in every other case that has actually gone to court, the evidence doesn't exist!

[1] https://www.politico.com/live-updates/2025/03/06/congress/ji...


The key problem with the government "requesting" a company do something is that the government has nigh infinite unrelated decisions that can be used to apply pressure to that company.

It's unreasonable to expect some portion of the executive branch to reliably act counter to the President's stated goals, even if they would otherwise have.

And that opportunity for perversion of good governance (read: making decisions objectively) is exactly why the government shouldn't request companies censor or speak in certain ways, ever.

If there are extenuating circumstances (e.g. a public health crisis), then there need to be EXTREMELY high firewalls built between the part of the government "requesting" and everyone else (and the President should stay out of it).


The government has a well-established right to request companies to do things, and there are good reasons to keep it.

For example, the government has immense resources to detect fraud, CSAM, foreign intelligence attacks, and so on.

It is good, actually, that the government can notify employers that one of their employees is a suspected foreign asset and request they do not work on sensitive technologies.

It is good, actually, that the government can notify a social media platform that there are terrorist cells spreading graphic beheading videos and request they get taken down.

It's also good that in the vast majority of cases, the platforms are literally allowed to reply with "go fuck yourself!"

The high firewall is already present, it's called the First Amendment and the platforms' unquestioned right to say "nope," as they do literally hundreds of times per day.


How does any of that prevent the government from de facto tying unrelated decisions to compliance by companies? E.g. FCC merger approval?


None of it de facto prevents anything, but if a corporation feels they're being bullied in this way they can sue.

In the Biden admin, multiple lawsuits (interestingly none launched by the allegedly coerced parties) revealed no evidence of such mechanics at play.

In the Trump admin, the FCC Commissioner and POTUS have pretty much explicitly tied content moderation decisions to unrelated enforcement decisions.

Definitely there's possibility for an admin to land in the middle (actually coercive, but not stupid enough to do it on Truth Social), and in those scenarios we rely on the companies to defend themselves.

The idea that government should be categorically disallowed from communicating and expressing preferences is functionally absurd.



