Fraudulent data raise questions about superstar honesty researcher (2021) (science.org)
54 points by dustincoates on July 14, 2023 | 33 comments


On HN there was a thread about the correspondence with another professor, Aimee Drolet Rossi:

https://news.ycombinator.com/item?id=36684242

https://openmkt.org/blog/2023/ucla-professor-refuses-to-cove...


Yes - just a few days ago:

UCLA professor refuses to cover for Dan Ariely in issue of data provenance - https://news.ycombinator.com/item?id=36684242 - July 2023 (130 comments)

Also related:

Noted study in psychology fails to replicate, crumbles with evidence of fraud - https://news.ycombinator.com/item?id=28264097 - Aug 2021 (102 comments)

A Big Study About Honesty Turns Out to Be Based on Fake Data - https://news.ycombinator.com/item?id=28257860 - Aug 2021 (90 comments)

Evidence of fraud in an influential field experiment about dishonesty - https://news.ycombinator.com/item?id=28210642 - Aug 2021 (51 comments)

Related but different story:

Harvard ethics professor allegedly fabricated multiple studies - https://news.ycombinator.com/item?id=36665247 - July 2023 (215 comments)

Harvard dishonesty expert accused of dishonesty - https://news.ycombinator.com/item?id=36424090 - June 2023 (201 comments)


Title should specify '2021'. I wonder if there have been any updates since the accusations first surfaced two years ago.


Yep, there's new evidence that one of Ariely's frequent honesty-research collaborators (Gino) was falsifying data too: https://datacolada.org/109



As someone who's spent a lot of years in an academic social science field and in analytics/modeling more generally, I feel strongly that most educated people have way too much trust in academic literature.

Outright generated data like this is rare and dumb, instead it's very common to play around with data cleaning/variable selection/modeling assumptions to get whatever result you want. I've seen it from grad students, tenured academics, DS/MLE in big tech, basically anywhere where you mix high pressure & neurotic personalities with analytics work.


Trusting the science becomes very hard when you see things like this. I wonder how many business processes based on psychology are just outright fraudulent.

It would explain why so many engineers are upset when we try some of the newer methodologies in practice.


Really poor that universities don’t ensure their professors are scrupulous.

Professors have tenure and tons of protection and should be held to a higher standard. The least a university can do is make sure they are honest:

“But given other universities’ past reluctance to investigate their own researchers, they are skeptical that Duke will conduct one. That may leave Ariely’s supporters insisting he is innocent and detractors assuming he is guilty, Nelson says. ‘No one knows. And that’s terrible.’”



> Some time later, a group of anonymous researchers downloaded those data, according to last week’s post on Data Colada. A simple look at the participants’ mileage distribution revealed something very suspicious. Other data sets of people’s driving distances show a bell curve, with some people driving a lot, a few very little, and most somewhere in the middle. In the 2012 study, there was an unusually equal spread: Roughly the same number of people drove every distance between 0 and 50,000 miles. “I was flabbergasted,” says the researcher who made the discovery.

It is kind of hilarious that they would fail at the simple task of generating fake but credible random data. Barely statistics literate? Probably using the simplest means available in Excel.
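The anomaly is easy to see even without plots. A minimal stdlib-only sketch (all numbers are invented for illustration, not the study's data: a Gaussian around 12k miles/year stands in for plausible real mileage, and a uniform draw stands in for the suspicious fabricated pattern) that compares simple bin counts:

```python
import random

random.seed(0)

def bin_counts(data, lo=0, hi=50_000, bins=5):
    """Count how many values fall in each of `bins` equal-width bins."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in data:
        i = min(int((x - lo) // width), bins - 1)
        counts[i] += 1
    return counts

n = 10_000
# Plausible real data: roughly bell-shaped, clamped to the 0-50k range.
real = [min(max(random.gauss(12_000, 6_000), 0), 50_000) for _ in range(n)]
# The suspicious pattern: every mileage from 0 to 50k equally likely.
fake = [random.uniform(0, 50_000) for _ in range(n)]

print("bell-ish:", bin_counts(real))  # low/middle bins dominate, tails near zero
print("uniform: ", bin_counts(fake))  # roughly flat counts across all bins
```

A single `print` of the bin counts is enough to see the difference: the bell-shaped sample piles up in a couple of bins while the uniform one is flat, which is exactly the "roughly the same number of people drove every distance" red flag described above.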


Honestly... it was probably an undergrad or contractor they tasked with fetching the data from the insurance company who decided to pad the numbers. Or maybe a desperate PhD student.

Which doesn't absolve the PI, who fundamentally has to stand behind the data, but it makes more sense than the PI being unable to fake convincing data.


What does PI stand for?


Principal Investigator. Basically, the individual (professor) responsible for a research project. They then employ a bunch of people to actually run the experiment.


I used to work as an insurance analyst and regularly dealt with customer-reported mileage distributions. It boggles the mind that this data was used for an academic paper when any junior analyst in an insurance context could tell you it's nonsensical with a glance. To me it just goes to show how far away these researchers are from the domain of the data that they're using in these studies. Kind of lowers the credibility of social sciences as a whole, unfortunately.


Well, the point of the study was to figure out who would be truthful, so it sounds like the data is exactly right then


There are more interesting studies to be done with this data IMO, which the researchers could have done if they had cared enough to talk to someone in the field.

As an example, we rarely saw completely normal "bell curves" with reported mileage. We often saw a roughly Gaussian shape between 10k and 30k, with a "J curve" under 10k, where some percentage of dishonest people would report their mileage as absurdly low.

Where permitted by regulators, we would actually rate on this. If you had a single car, reported yourself as fully employed outside of the home, and also reported 5k mileage per year, you would receive a SURCHARGE compared to someone who reported 15k, because there was signal about your likelihood to make a claim in the fact that you were lying. The signal disappeared if you looked at people with more plausible arrangements, like having two cars, one of which had low mileage, or having a single low-mileage vehicle while self-employed (possibly WFH), etc...

I have to believe a clever researcher could find some interesting results with such data.
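That J-curve is straightforward to model as a mixture. A hypothetical simulation (the 10% dishonest share, the 15k/5k Gaussian for honest reporters, and the 1k-5k underreporting band are all invented for illustration, not real insurance figures):

```python
import random

random.seed(1)

def reported_mileage():
    """Hypothetical model of self-reported annual mileage: most drivers
    report something near the truth (Gaussian around 15k), but a fraction
    of dishonest drivers report an absurdly low figure."""
    if random.random() < 0.10:               # assumed 10% dishonest share
        return random.uniform(1_000, 5_000)  # "I barely drive" claims
    return max(random.gauss(15_000, 5_000), 0)

sample = [reported_mileage() for _ in range(20_000)]
low = sum(1 for m in sample if m < 5_000)           # the J-curve spike
mid = sum(1 for m in sample if 10_000 <= m < 30_000)  # the Gaussian bulk
print(f"under 5k: {low}, 10k-30k: {mid}")
```

The low-end count comes out several times larger than a pure Gaussian tail would predict, which is the excess-of-liars spike the surcharge keyed on.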



It appears the honesty experts were more skillful than the dishonesty expert.


If true, is this clearly a crime?


I heard Ariely speak in Arizona around 2010. He described an experiment which implied precognition: folks supposedly pressed a button in reaction to a random sensation before the signal had time to reach their brains, even assuming a direct extracorporeal path between the stimulus point and the lower brain and propagation at the speed of light in a vacuum.

It was raised with a "we don't know what we don't know" smile and shrug. It discredited him, at least partly, as a scientist in my eyes.


I have heard about that experiment too, and I am now pretty sure it can be explained by chronostasis: https://en.m.wikipedia.org/wiki/Chronostasis


Thank you. Ariely started by describing chronostasis, the subjective temporal muddling of sensations. Then he dropped the "and one more thing" shtick to disclose this other experiment as externally validating the precognition. (À la Chiang's "What's expected of us" [1].)

He was suggesting to a lay audience the plausibility of superluminal causation on the basis of a single psychology experiment. That's not lying. But it's without scientific integrity.

[1] https://www.nature.com/articles/436150a


I looked it up now.

"In the Libet experiment, Libet asked subjects when they felt the intention to move: he asked them to watch the second hand of a clock and report its position when they felt the conscious will to move.

Libet found that the unconscious brain activity leading up to the conscious decision by the subject to flick their wrist began approximately half a second before the subject consciously felt that they had decided to move."

<--- the bug could be chronostasis. The subjects' perception of time is warped when looking at the clock.


> he asked them to watch the second hand of a clock and report its position

This is chronostasis. It's requesting a subjective read. Ariely claimed folks pushed the button before the event. That's objective foresight. That's superluminal causation.


Did Michael Mann fudge more weather data?


Remember kids, never deny science!!


Science is all about being critical.

You just need proof to back up your different idea.


I upvoted you because you are 100% right. The sad reality is that in recent times, important and influential people in society have decided that doing your own research to form your own opinions and ideas is insanity and anti-science. Even if you are a practicing scientist, you won't even get a chance to be heard, because the science is settled and wrongthink is dangerous!

How can anyone be critical of science if it is already considered settled?


> doing your own research

Reading online articles is not doing your own research

> Even if you are a practicing scientist you won't even get a chance to be heard because the science is settled and wrong think is dangerous!

This is an oversimplification of a complex situation.

Redoing established work doesn't get the same level of funding as new work.

New work that ignores the existence of others' work doesn't get funding, because it fails to show why it is necessary.

You can get funding for unique ideas; you just won't get it for, say, "proving vaccines cause autism" unless your paper idea deals with the mountains of evidence showing that link doesn't exist.

> How can anyone be critical of science if it is already considered settled?

There is a difference between being critical based on evidence and being critical based on ideas. "I disagree" isn't a reason to dismiss existing science. Similarly, a single study with a small sample size isn't important compared to a multitude of studies with the opposite conclusion.


This is why trusting the science is a really unscientific thing to do.


You misspelled "person", with overall phrasing suggesting "FUD" (i.e., propaganda / rhetorical techniques).

Science, in terms of the scientific method, is about the only thing we can trust. In fact, for those who really want to know something about the external world, the scientific method is the way to do so. Now, even this phrase has been propagandized in recent years. But the workflow is something even children work out and use implicitly. E.g., hypothesis: "I believe my parents won't notice if one extra cookie has been eaten"; test: on different days, sometimes I eat an extra cookie, sometimes I don't (imagining the "advanced" "juvenile delinquent" / "budding 'scientist'"); result: what days did my parents notice, yell at me, etc.

In the world more generally, one of the ways we correct for the error introduced by crappy human beings / information (unintentional and not) is repeatability. This is incessantly drummed into people who choose science as a career path, at any credible institution. And it's a key factor, especially in the face of continuing efforts by propagandists of all stripes to turn the world into "Idiocracy" - a reference I make not for rhetorical purposes vis-a-vis its title, but because the actual content of the film truly matches what current trends, taken decades out, would realistically turn the world into.


> why trusting the science is a really unscientific

Yes, it is. The operating function of science is to question. What's unscientific is leveling the field between hypotheses with some data and those without, or worse, elevating an unsubstantiated guess above a theory with a track record.


I want to get "the science" tattooed on my knuckles.



