> And a colleague dismisses it by saying that there is a book "6 Ways to Lie with Statistics".
Except, that's going in the right direction towards a better argument: empiricism requires your statistics to be peer reviewed for errors or deception before being believed. That takes a skilled individual.
So, you either think they're very good at statistics or you want them to put faith in your work. Otherwise, they need a smart assistant they trust to review the statistics. Then, they have increased confidence in your solution but it still might be wrong.
He was not calling for better statistics; he suggested ignoring statistics.
It was a simple case, and I was not actually presenting statistics I had collected; I just suggested trying some numerical evidence to inform the decision.
On another occasion I mentioned to somebody that it's necessary to choose drugs or medical approaches verified with medical trials and double-blind methods. They replied that there is a book about how to lie with statistics and continued to consider unverified methods.
I mean that in real life, sometimes very simple fallacies happen.
Some statistics-based decisions may be wrong => the right decision must avoid statistics.
These cases could probably be addressed with automated tools in the near future.
I think it is easy to highlight the mistake by rephrasing the argument as OP did. Rewritten that way, it is easy to see that the fallacy lies in taking some specific observations and claiming they hold in general, without providing a solid explanation as to why that should be the case. Another way of rephrasing it could be, "I have evidence of some statistics-based decisions that were wrong, therefore all statistics-based decisions are wrong". If a person still doesn't get it, put it as "I have evidence of you being wrong once, therefore you must always be wrong (including in this specific case :D )".
But what could I do to help people get better in the __act__ of spotting this, or doing what you describe, themselves?
Idk, for me it is subconscious; I just "feel" it or know that it is logically faulty.
Also, the rephrasing doesn't always work imo: you could have a logical statement that is totally valid in some contexts and not valid in others. And you also need to think about the validity of the premise, and whether it is legitimate to draw the conclusion in natural language.
The paper proposes to use automated tools to help people avoid logic errors. :)
I doubt anyone is completely free from logic errors. I am certainly not.
But while seeing the logic errors others make, we are often blind to our own logic errors; if we were aware of them, we would correct them.
Just recently, while considering a logical fallacy in somebody's argument, I realized that on another occasion I myself had used similar argumentation. Only when I noticed the problem in somebody else was I able to recognize my own error.
There are many reasons people make logic errors. For example, if they like the conclusion, they more easily accept even flawed arguments in support of it.
I believe that is a misunderstanding of what the other party is expressing in this case. It's not "wrong once, so always wrong" but rather "intentional deception in the past, therefore might intentionally deceive again, and I don't know how to verify, therefore I shouldn't trust it".
Uhm, but then, it would work only if applied to the same entity that was deceptive in the past. Applying this to some other party would still make little to no sense, unless you've got some reason to believe this other party wants to deceive you as well.
In science, we're required to take a default deny approach. The scientific method treats everything as wrong or a lie by default. Then, a combination of independent review and replication add confidence to the claims. Assuming you trust that person.
Later on, media reported pervasive problems in reviewer independence (especially drug studies), statistical claims (especially p hacking), and replication ("replication crisis"). So, we now have more reason to not trust scientific claims without independent replication. That's double true for statistical claims.
If I read one, I say "maybe true, maybe not." That at least brings me above a random guess in my belief on that topic. If they're highly biased, I ignore their claims on that topic entirely, since cherry-picking evidence is so common. Certain sources have enough independent vetting or positive outcomes to trust by default in a probabilistic sense. A person whose chips work is fairly trustworthy on basic chip design and their own product designs. I default to believing their essential claims but know they might be disproven later.
That's how empiricism works. Anyone doing less is probably using some combo of faith, testimony, logic, or feelings. They can also dress these up with scientific language or mathematical formulas, too. But was it rigorously reviewed by someone who doubted everything about it? Often not for statistical or scientific-sounding claims.
> The scientific method treats everything as wrong or a lie by default
No, the scientific method requires proof both for a positive and for a negative answer. If we can't prove either, all we can say is that we don't know whether something is true or false. Think, e.g., of Riemann's hypothesis or the Collatz conjecture: should we say those are wrong because no one has so far proved them correct?
Unless someone obviously shares common goals with you they are a potential adversary. When faced with a tool that you are confident can be used to deceive you, and a potential adversary who you are confident is aware of this fact, you should then clearly distrust that tool in that context.
I'm not sure that "distrust of thing I don't understand" can really be considered a fallacy. Certainly it sounds like the other party's tone wasn't constructive in this case. It also sounds like they are fairly ignorant.
Still, the underlying sense that you shouldn't trust people making claims based on things that you don't understand is probably a fairly solid survival strategy in general. Better to miss out than get scammed.
To put it another way, a call to "trust the science" in the absence of further elaboration is itself an appeal to authority. Despite that, it's not actually wrong - you generally should trust openly published science that has been reproduced by at least one unrelated party. Which serves to illustrate the rather glaring issue with the premise of the linked article, at least for practical everyday use.
The fallacy was that people consider the presence of statistical evidence a negative sign, not realizing it's possible to lie without statistics as well.
Let's imagine a book "100 Ways to Harm Your Health with Medicine", and a sick person choosing between magic and medicine: "Aha, the book has proven that medicine is harmful, so of course magic".
Indeed that would be the wrong conclusion to jump to.
However, that isn't how I read the original example. I saw it more as "A is backed by evidence B" rebutted with "I don't trust evidence B because ...". Despite the described tone being poor and the individual being obviously horribly ignorant, when assessed from their (apparent) point of view instead of my own, that position seems fairly reasonable to me.
In other words, not so much "magic instead of medicine" as rejecting the claim that medicine is superior to magic while also declining to hold the view that magic is superior to medicine.
What you describe can be called healthy critical thinking.
The case I mentioned was different; I just described it poorly due to my limited English proficiency and typing on mobile, so you and others suspected the opponent meant reasonable doubts.
Anyway, logical fallacies are ubiquitous, and at the level of the simplest Aristotelian logic, not even requiring first-order logic.
An automated tool catching them would probably be useful.
Arguments similar to "bananas are yellow, so if I see something yellow, that's a banana" are quite common in politics.
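That pattern is "affirming the consequent," and its invalidity can be shown mechanically with a tiny truth-table check. This is a toy sketch (not from the paper, and the `implies` helper is just an inline definition of material implication):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Enumerate all truth assignments for (banana, yellow) and collect the
# counterexamples: assignments where the premise "banana -> yellow" holds
# but the fallacious conclusion "yellow -> banana" does not.
counterexamples = [
    (banana, yellow)
    for banana, yellow in product([False, True], repeat=2)
    if implies(banana, yellow) and not implies(yellow, banana)
]

# (banana=False, yellow=True) — a yellow non-banana, e.g. a lemon —
# satisfies the premise but refutes the conclusion.
print(counterexamples)  # [(False, True)]
```

A single counterexample is enough: the inference from "bananas are yellow" to "yellow things are bananas" is invalid, which is exactly the error an automated checker would need to flag.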
The paper cites some previous studies of fallacies in an online forum and in argumentative essays.