Not familiar with FOL as a formalism, and would love to see this in action. I feel like it's a big part of the solution to propaganda.
The other part seems to be values obfuscation, and I wonder if this would help with that too.
If Joe says that nails are bad, it can mean very different things if Joe builds houses for a living and prefers screws, or if Joe is anti development and thinks everyone should live in mud huts.
Propaganda will often cast a whole narrative that can be logically consistent, but entirely misrepresents a person or people's values (their motivations and the patterns that explain their actions), and there will be logical fallacies at the boundaries of the narrative.
We need systems that can detect logical fallacies, as well as value system inconsistencies.
Intake the following block of text and then formulate it as a steelmanned deductive argument. Use the format of premises and a conclusion. After the argument, list possible fallacies in the argument. DO NOT fact check; simply analyze the logic. Do not search.
After the fallacies list, show the following:
1. Evaluate Argument Strength: Assess the strength of each premise and the overall argument.
2. Provide Counterarguments: Suggest possible counterarguments to the premises and conclusion.
3. Highlight Assumptions: Identify any underlying assumptions that need examination.
4. Suggest Improvements: Recommend ways to strengthen the argument's logical structure.
5. Test with Scenarios: Apply the argument to various scenarios to see how it holds up.
6. Analyze Relevance: Check the relevance and connection between each premise and the conclusion.
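The checklist above can be wrapped into a reusable prompt template. A minimal sketch (the template is a paraphrase of the instructions above; no particular LLM client is assumed, so the send step is left to whatever API you use):

```python
# Hypothetical prompt template for the argument-analysis workflow above.
# Only builds the prompt string; pass the result to your LLM client of choice.

ANALYSIS_PROMPT = """\
Intake the following block of text and then formulate it as a steelmanned
deductive argument, in the format of premises and a conclusion. After the
argument, list possible fallacies. DO NOT fact check; simply analyze the
logic. Do not search.

After the fallacies list:
1. Evaluate argument strength.
2. Provide counterarguments to the premises and conclusion.
3. Highlight underlying assumptions.
4. Suggest improvements to the logical structure.
5. Test the argument against various scenarios.
6. Analyze the relevance of each premise to the conclusion.

Text:
{text}
"""

def build_analysis_prompt(text: str) -> str:
    """Fill the template with the article or post to be analyzed."""
    return ANALYSIS_PROMPT.format(text=text)
```

Keeping the instructions in one fixed template makes the analysis repeatable across articles, rather than re-typed ad hoc each time.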
Maybe. One problem we have now is that fact checking is a lot more expensive than bullshitting. If we had a program that could bring things closer to parity it would be nice.
But also, a lot of propaganda isn’t false per se but simply blown out of proportion, or underproportioned in cases of inconvenient truths. The truth is a distribution of events, and editors continuously choose how to skew that distribution.
(One of my very interesting possessions is an old Chinese state-owned newspaper. As far as I could tell, their main tool wasn’t lying, but simply omission.)
For example, if you wanted to push a narrative that e.g. pit bulls are the most dangerous problem in America, you would just post a nonstop stream of pit bull attack videos. It taps into cognitive biases people have which aren’t propositional logic statements.
More broadly, the world is stochastic, at least in the way we experience it. So our brains have to make sense of that, which is an opportunity for narratives to creep in.
So maybe the solution is to have these FOL capabilities close to the user and far from the information source.
FOL-based values analysis of information streams, manifesting as a user interface for configuring the algorithms that decide what information is surfaced to you in media.
This is why I said this sort of thing might be part of a solution. The whole solution would involve other significant parts.
Such a publication would not explicitly come out and say “pit bulls are the most dangerous problem in America”. That’s something that can be easily falsified.
They would say something like “learn the truth about pit bulls” and then feed you an endless barrage of attack footage and anecdotes and emotionally charged information.
The purpose is to shape your priors. If all you see is pit bulls attacking people, your subconscious will rate them more risky. You may not even be able to verbalize why you changed your opinion.
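This prior-shaping mechanism can be made concrete with a toy model (my own illustration, not from the thread): a Beta-Binomial observer updating a perceived "attack rate" from the incidents it is shown. The editor never lies, since every shown incident is real, but selective sampling still pushes the posterior far from the true base rate.

```python
def posterior_mean(shown_attacks: int, shown_non_attacks: int,
                   prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Mean of Beta(prior_a + attacks, prior_b + non-attacks):
    the observer's updated belief about how often attacks happen."""
    return (prior_a + shown_attacks) / (
        prior_a + prior_b + shown_attacks + shown_non_attacks)

# A feed that mirrors reality (1 attack per 1000 encounters)...
balanced = posterior_mean(1, 999)    # ~0.002

# ...versus a curated feed of nothing but attack videos.
curated = posterior_mean(200, 0)     # ~0.995

# Same rational update rule, wildly different beliefs.
```

The point is that no individual update is irrational; the bias lives entirely in which evidence reaches the observer.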
People say that in the future information will not be directly ingested by people – instead everybody will have a "filter", similar to how we use spam filters, but one that rewrites information (removing misinformation, adjusting bias, adding references, summarizing and, probably more rarely, expanding it, etc.).
I believe this future (all information being mediated like this) is not far off, and adoption is already meaningful, judging from the decline in direct traffic to some well-known information source websites.
Perplexity and Phind (as well as the upstream chat interfaces now) already support internet searching, which does exactly this.
When reading (news and other) articles I find myself more and more often reading them through LLMs to perform the steps above. If you've never tried it, it's really worth doing, especially for politically biased news articles.
I believe this shift in information consumption is happening more and more for everybody.
Everything will become indirect, likely with multiple layers (i.e. an extra layer at the OS level is likely – this is frankly perfect for use cases like protecting minors: it would be great to be able to safely give a laptop to your kid knowing there is an AI-based content filter you've set up for their age group).
You mention the world being (at least subjectively) stochastic. This brings to mind the idea that a model of probability, rather than just logic, might be more beneficial.
The example you gave of focusing excessively on some topic in order to make it seem like a bigger deal…
Hm, is there a way we could formalize such things, the way formal fallacies are formalized?
It seems more difficult than classifying common formal fallacies.
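One crude starting point (my sketch, not an established formalization) is to quantify "blown out of proportion" as the ratio between a topic's share of coverage and its share of the underlying event distribution:

```python
def overemphasis_ratio(coverage_count: int, total_coverage: int,
                       event_count: int, total_events: int) -> float:
    """Ratio of a topic's coverage share to its real-world event share.
    Values far above 1 suggest overemphasis; far below 1, omission."""
    coverage_share = coverage_count / total_coverage
    event_share = event_count / total_events
    return coverage_share / event_share

# Hypothetical numbers: pit bull attacks are 50 of 100 stories in a feed,
# but only 30 of 40,000 recorded dog-related incidents.
ratio = overemphasis_ratio(50, 100, 30, 40_000)   # ~667x overrepresented
```

The hard part, of course, is agreeing on what counts as the "true" event distribution, which is exactly where the editorial skew creeps back in.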
I disagree with your first point. People are far more rational than you are making them out to be, it's just that they are rational within their own value system, not yours.
Also today's propaganda is capable of adapting itself to each audience member's value system to make it more palatable, and then gradually nudge the audience towards the desired narrative/beliefs/values. The systems that distribute the propaganda are already analyzing people's values and using that information to manipulate people. I think that information asymmetry is part of the problem. I could be wrong, but I think flipping that dynamic around so the public can see the true values of the subjects of propaganda may help neutralize a lot of propaganda.
As far as what impact this specific project will have, I have no idea. You may be right. I'm curious about its limitations and how it can be applied.
I thought so too, but recently so many people dropped or adapted their core beliefs to be able to support and defend people in power that they really love that it made me change my mind. Now I think that value systems are malleable and are formed by whatever makes us feel good. And the logical consistency on top is very optional.
Or maybe these people you speak of assume that the people in power have values aligned with their own, and if there was an unbiased system that highlights value discrepancies using formal logic, they might not "love" those people as much anymore.
What I assume you might be missing is that you are looking at the world through a different lens than these other people. Both you and they are consuming propaganda and can't detect it as propaganda because it aligns with your values. However it subtly nudges your values in a direction over time.
I agree that people's values and core beliefs are malleable, but in the same way a tree trunk is. It may seem like these people have changed a lot and you haven't, but I think it's more likely that you've changed too, and that they've changed less than you think.
No one is immune to propaganda, which is why anything that can help disarm it interests me.
You touched on many points, but one thing to consider: people seek out information that confirms their worldview and actively protect themselves against anything that can harm their feelings for their idols.
John Doe isn’t trained in logic and can adjust any of his premises if it means he can continue to admire his favorite celebrity. It’s a combination of flawed reasoning and premise flexibility.
Not to mention, any fact can be endlessly challenged and questioned even if it’s agreed upon and largely incontestable.
Humans are not fully rational, but they're more rational than many assume. For instance, many thought the illusory truth effect showed that people are biased towards believing things they hear many times over, which is great for propagandists, but it turns out this is only true when they are in a "high quality information" environment. This is quite rational! They should update towards believing repeated statements when the environment they're in has shown itself to be reliable. When the environment they're in has shown itself to be unreliable, the illusory truth effect basically disappears.
How does that explain conservatives doubling down on whatever they hear even if it's obviously false? I guess because they wrongly consider some "low quality information" environments "high quality information" environments?
Not everything can be reduced to this one cognitive phenomenon. The behaviour you describe stems from: confirmation bias, and the backfire effect/identity-protective cognition. Also this isn't exclusive to conservatives:
You link aside, I think the obvious evidence is that that behavior is significantly more common in conservatives. Literally the most basic of facts get denied in bulk. I don't understand how you could make any argument that any other major political affiliation engages in the same behavior to a comparable extent.
> You link aside, I think the obvious evidence is that that behavior is significantly more common in conservatives. Literally the most basic of facts get denied in bulk.
This is modulated by who is currently in power. Conservatives were worse when they lost and Biden was in power. Democrats are ramping up the crazy now that they're the underdogs.
> I don't understand how you could make any argument that any other major political affiliation engages in the same behavior to a comparable extent.
Go check out X and Bluesky and how many people are denying Trump was legitimately elected, and how they are convinced Musk tampered with the voting machines.
As for denying basic facts, there's a whole host of basic scientific facts that people who lean left deny wholesale, eg. heritability of behaviours, personality and other characteristics, differences between groups, denying certain features of sex and the sexes, etc.
I won't claim that the problem is equal on both sides, for many reasons I won't belabour here, but it's not nearly as wide a margin as you're implying. Part of the reason it seems so one-sided to you is my-side bias + the biased coverage the other side gets.
> This is modulated by who is currently in power. Conservatives were worse when they lost and Biden was in power. Democrats are ramping up the crazy now that they're the underdogs.
That isn't remotely true. Conservatives have been consistently in the lead, and there are studies showing how much more prone to believing misinformation they are.
> Go check out X and Bluesky and how many people are denying Trump was legitimately elected, and how they are convinced Musk tampered with the voting machines.
There's at least reasoned arguments for that. That isn't the same thing as rejecting the use of masks during a pandemic.
> it's not nearly as wide a margin as you're implying.
It really is, but we clearly disagree.
> Part of the reason it seems so one-sided to you is my-side bias + the biased coverage the other side gets.
You shouldn't make assumptions about how or where I get my news. I don't think coverage bias applies at all in influencing my conclusion based on how I get my news.
> Conservatives have been consistently in the lead, and there are studies showing how much more prone to believing misinformation they are.
No, that's misleading. Conservatives have also been consistently in the lead on "authoritarianism" to the point that it was considered a purely conservative phenomenon, until someone actually thought to ask questions like "what would left wing authoritarianism look like?" and suddenly they found it everywhere.
You seem not to realize how unreliable the data is on these questions. Not only is the replication rate of psychology and sociology ~35%, but the demographics of those fields yields a clear bias on exactly these questions. You simply cannot draw such sweeping conclusions from the unreliable data we have.
When conspiracy and biased thinking are tested directly, as with the study I linked, there is no difference in how the biases impact their thinking. Both sides are extra harsh on their enemies, are overly forgiving of their allies, etc. Confirmation bias and motivated reasoning all around.
> There's at least reasoned arguments for that.
Do you think that there were reasoned arguments for Trump having won in 2020?
> That isn't the same thing as rejecting the use of masks during a pandemic.
They could cite reasons for that too, you just don't believe they are valid reasons. It's the same confirmation bias in all cases though.
I think you're envisioning this in a pessimistic way.
I totally agree that the end conclusion "this statement is fallacious" is pretty useless. But I assume that a working process would also yield the chain of judgements (A is right, B is right, C is wrong, etc). I think that would be VERY useful.
People who become captured by propaganda and lies generally are not sold on 100% of the propaganda. There are certain elements they care more about and others they can ignore. A way to deprogram people through conversation is to just ask them to explain things about their views and ask them to reconcile them with reality. The reconciliation is painful for them and that pain keeps people "in" irrational beliefs - but it's also how people find their way out. Once they no longer associate themselves with the conspiracy, they can discard beliefs associated with it...provided they can think through them.
I think being able to automatically decompose a fact check into the elements of what "is true" and "is false" in a statement would be HUGE. An essential tool in helping people escape from information swamps.
I vaguely remember a post I read on reddit [1] around the beginning of COVID by a nurse who dealt with an anti-vax patient. It went along the lines of "Big pharma wants to poison me", "Maybe you're being played and Chinese propaganda wants you to believe that to hurt the US". Apparently induced quite a lot of dissonance.
Fighting fire with fire.
[1] Impossible to find of course. And with all the LARPing going on on there, take this with two grains of salt. Given all the crazy shit going on in the US, I find it totally believable though.
You know First Order Logic. It's just ordinary logic; it's the default thing people think of when they say "logic".
But it's also not very useful for human reasoning. It's good for math and logic puzzles and bad at anything else. It's bad at time, at belief, at negation. None of those things act like you expect them to.
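To make the "logic puzzles" point concrete: a mechanical validity check really does catch formal fallacies. Here is a minimal propositional truth-table checker (FOL proper adds quantifiers; this toy version is enough to show the idea of flagging, say, affirming the consequent):

```python
from itertools import product

def valid(premises, conclusion):
    """True iff every truth assignment of (P, Q) that satisfies all
    premises also satisfies the conclusion (i.e. the argument is valid)."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: P -> Q, P  therefore Q   (valid)
mp = valid([lambda p, q: implies(p, q), lambda p, q: p],
           lambda p, q: q)                          # True

# Affirming the consequent: P -> Q, Q therefore P   (formal fallacy)
ac = valid([lambda p, q: implies(p, q), lambda p, q: q],
           lambda p, q: p)                          # False
```

Note what this can't do: it says nothing about whether the premises were a fair representation of reality in the first place, which is exactly where the time/belief/omission problems live.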
This won't "solve" propaganda or misinfo IMO. Checking logical consistency and eliminating fallacies still wouldn't address the selective presentation or omission of facts, for instance, and the notion that it could avoid misrepresenting a person or their values assumes that someone has already accurately and fully captured a detailed description of a person's values. But that's the whole problem!
This is just the formal specification problem all over again. Verifying software against a spec is good and useful, but verification doesn't tell you whether the spec itself correctly captured the desired objective, it can only tell you whether the spec is logically consistent and that you implemented it faithfully.
It would work about as well as the internet did at bringing us enlightenment. Besides, points of contention tend to form around ethics, whose axioms are unprovable if non-cognitivism is true (and we have no reason to believe it isn't).
The problem isn't values obfuscation. The problem is that many people, especially conservatives, do not care about values. Instead, they care about virtues.
People who approach politics from a virtue ethics perspective are vulnerable to propaganda because logic and value have no bearing whatsoever on their decision to accept or reject a narrative.
You can't think critically for someone else. They must do it on their own.
A virtue exists at the beginning of a narrative. A value is a judgment of the narrative after the fact.
One virtue common in conservative politics is competition. A healthy instance of capitalism is expected to benefit all participants by virtue of competitive markets. The value of our current instance of capitalism is that very large corporations make a lot of cool tech and sell it at low prices.
But what about homelessness? Isn't that a real tangible negative value? Yes. What should we do about it? Well, a conservative will probably tell you that we should help homeless people by making housing (and homeless people) more competitive.
But that's clearly not working! The system does not provide a value that we very seriously need! These arguments don't matter to conservatives, because to them, it's all about the virtues.
You and I are using completely different definitions of "values".
The definition my comment depended on was one where values act as a filter for actions (or patterns of actions).
Drug addicts only value short-term highs (next fix). Someone else may value being a musician, being reliable, or being honest. In 2018 maybe someone would have bought a Tesla because they value being seen as progressive and value experiencing modern technology. Notice that all my examples start with a verb, which can often manifest as a way of being.
I didn't bring up virtues, but my understanding of virtues is that they are values deemed by at least some to be objectively "good", such as the cardinal virtues. Whereas values can be both good or bad, such as masochists who value watching others suffer.
A value is something that you value after evaluating it.
A virtue is presumed to be good. If it were presumed to be bad, it would be a vice. People commit to virtuous behaviors because they expect valuable consequences.
For example, someone who considers honesty a virtue might implement that by choosing to tell the whole truth; or they might implement it by choosing not to tell lies; or even by punishing others who they believe to be dishonest. It is assumed that there is no need to evaluate their behavior, because it was guided by virtue.
Propaganda purports to be virtuous. This is important, because a target audience who relies on virtue ethics will not evaluate the narrative.
For example, when conservatives in the US argue against single-payer healthcare, they do not evaluate its merits against the merits of the current insurance system. Instead, they declare its foundational vice: "socialism". Opposite of the ultimate conservative virtue: "capitalism".
It doesn't matter how incoherent this argument is: it isn't an argument at all. It's a claim to virtue.
This is the core principle of conservative politics, and the primary reason conservatives are so vulnerable to fascist narratives coming out of the alt-right.
Thanks for expanding. I think we are in agreement about values and virtues, and I appreciate your perspective on the nuance of presumed virtues as they relate to propaganda, it sounds right to me.
Where you lose me is in the generalization and singling out of conservatives. It sounds like you're saying they are uniquely susceptible to propaganda, yet all my anecdotal experience adds up to it being fairly equal on both sides.
I haven't dug into any formal study so I could be wrong, but I am close to lots of people who are politically left who seem to follow that "presumed virtue" -> reaction (skip critical thinking) pattern. To be clear, my guess is it's a very common and natural pattern, like cognitive biases and optical illusions. It's a consequence/bug of collective cognition.
To be clear, I do not think conservatives are the only ones using virtue ethics. Much of how the social justice movement plays out is a good example of this dynamic.
What makes conservatism unique is that the entire movement is centered on virtue ethics. There is nothing new about this: just look at Reaganomics, the wars on drugs and terror, abortion bans, gay marriage bans, etc. Practically everything about conservative politics is expressed and defended as a virtue.
The next unique thing is that the alt-right has taken over conservative narrative. There are groups of people that literally call themselves fascist, and they aren't just getting attention from conservative politicians: they are writing talking points that are echoed over and over again by the house, the senate, and even the president.
The overwhelming majority of conservatives are not fascists, yet most are evidently happy to work with them. Podcasters and news entertainers are constantly beating the drum of alt-right rhetoric, because it's engaging, and engagement gets them paid. Conservative voters are happy because their team is winning. Fascists are happy because their virtues go mainstream. There is no infighting, because there is no criticism, because there is no evaluation to begin with.
We have ventured far enough outside my sphere of competence that I'm running out of ways to constructively engage, but I'm sure some of your points will linger in my mind.
I have had no direct exposure to what you're describing in your third and fourth paragraphs, and so I am not in a position to agree or disagree. All I can say is that I haven't seen it yet. What I have seen is misrepresentation (from both sides) and a pattern of media of all types stoking division.
A few years ago I learned about the concept of "most respectful interpretation" as a tool for conflict resolution and establishing trust in teams. So much of media these days feels like the opposite.
I'm trying my best to understand what's true, while accepting my own limitations and the reality that I may never be able to tell what's really going on at the global power level. At the very least it seems to require a lot of reserving judgement.
If the media is a stained glass window, looking through the blue glass and then the red glass is not the same thing as looking through clear glass.