
As far as IRB violations go, this seems pretty tame to me. Why get so mad at these researchers—who are acting in full transparency by disclosing the study—when nefarious actors (and indeed the platforms themselves!) are engaged in the same kind of manipulation? If we don’t allow it to be studied because it is creepy, then we will never develop any understanding of the very real manipulation that is constantly, quietly happening.



> Why get so mad at these researchers—who are acting in full transparency by disclosing the study—when nefarious actors (and indeed the platforms themselves!) are engaged in the same kind of manipulation?

I’m mad at both of them: the nefarious actors and the researchers. If I could, I would stop both.

The bad news for the researchers (and their university, and their ethics review board) is that they cannot publish anonymously. Or at least they can’t get the reputational boost they were hoping for, so they had to come clean. It is not as if they had an option to keep it secret and still publish their research somehow. Thus we can catch them and shame them for their unethical actions, because that is absolutely what this is. If the ethics review board doesn’t understand that, then their heads need adjusting too.

I would love to stop the nefarious actors too! Absolutely. Unfortunately they are not so easy to catch. That doesn’t mean I’m not mad at them.

> If we don’t allow it to be studied because it is creepy

They can absolutely study it. They should recruit study participants and pay them. Get their agreement to participate in an experiment, but tell them a cover story about what the study is about. Then run the experiment on a private forum of their own making, and afterwards debrief the participants about what the experiment was really about and in what ways they were manipulated. That is the way to do this.


> If we don’t allow it to be studied because it is creepy, then we will never develop any understanding of the very real manipulation that is constantly, quietly happening.

What exactly do we gain from a study like this? It is beyond obvious that an LLM can be persuasive on the internet. If the researchers want to understand how forum participants are convinced of opposing positions, this is not the experimental design for it.

The antidote to manipulation is not a new research program to affirm that manipulation may in fact take place, but to take posts on these platforms with a large grain of salt, if not to disengage from them for political conversations altogether and have those conversations with people you know and in whose lives you have a stake instead.


"Bad behavior is going happen anyway so we should allow researchers to act badly in order to study it"

I don't have the time to fully explain why this is wrong if someone can't see it. But let me just mention that if the public is going to both trust and fund scientific research, they should be able to expect researchers to be good people. One researcher acting unethically is going to sabotage the ability of other researchers to recruit test subjects, etc.


> As far as IRB violations go, this seems pretty tame to me

Making this many people upset would be universally considered very bad and much more severe than any common "IRB violation"...

However, this isn't an IRB violation. The IRB seems to have explicitly given the researchers permission to do this, viewing the value of the research as worth the harm caused by the study. I suspect that the IRB and university may get in more hot water over this than the research team.

Maybe the IRB/university will try to shift responsibility to the team and claim that the team did not properly describe what they were doing, but I figure the IRB/university can't totally wash their hands of it.


I would not consider anything that only makes people upset anywhere close to the "very bad" category.


Yeah, the IRB is concerned about things like medical research. You are absolutely allowed to lie to psych research participants if you get approval, and merely lying to research subjects is considered a minor risk factor.


Unless you happen to be the most evil person on the planet, someone else is always behaving worse. It's meaningless to bring up.

Even the most benign form of this sort of study is wasting people's time. Bots clearly got detected and reported, which presumably means humans are busy expending effort dealing with this study, without agreeing to it or being compensated.

Sure, maybe this was small scale, but the next researchers may not care about wasting a few man-years of other people's effort dealing with their research. It's better to nip this nonsense in the bud.


“How will we be able to learn anything about the human centipede if we don’t let researchers act in full transparency to study it?”


Bit of a motte and bailey. Stitching living people into a human centipede is blatantly, obviously wrong and has no scientific merit. Understanding the effects of AI-driven manipulation is, on the other hand, obviously incredibly relevant and important, and doing it with a small-scale study in a niche subreddit seems like a reasonable way to go about it.


At least part of the ethics problem here is that it'd be plausible to conduct this research without creating any new posts. There's a huge volume of generative AI content on Reddit already, and a meaningfully large percentage of it follows predictable patterns: wildly divergent writing styles between posts, posting 24/7, posting multiple long-form comments in short time periods, usernames following a specific pattern, and dozens of other heuristics.

It's not difficult to find this content on the site. Creating more of it was a redundant step: it added little to the research while creating very obvious ethical issues.
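To make the kind of heuristics I mean concrete, here's a rough sketch of a scorer. The field names, regex, and thresholds are made up for illustration; they are not taken from Reddit's API or from any real detector.

    # Rough sketch only: hypothetical fields and thresholds, not a real detector.
    import re
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Account:
        username: str
        comments_per_hour_of_day: List[int]   # 24 buckets of comment counts
        avg_words_per_comment: float
        comment_timestamps: List[float] = field(default_factory=list)  # unix seconds

    def bot_likeness_score(acct: Account) -> float:
        """Combine a few of the heuristics above into a rough 0..1 score."""
        score = 0.0

        # Posting 24/7: active in nearly every hour-of-day bucket.
        active_hours = sum(1 for c in acct.comments_per_hour_of_day if c > 0)
        if active_hours >= 22:
            score += 0.4

        # Multiple long-form comments posted within minutes of each other.
        ts = sorted(acct.comment_timestamps)
        bursts = sum(1 for a, b in zip(ts, ts[1:])
                     if (b - a) < 120 and acct.avg_words_per_comment > 150)
        if bursts >= 3:
            score += 0.3

        # Username matching a common auto-generated Word-Word-1234 pattern.
        if re.fullmatch(r"[A-Z][a-z]+[-_][A-Z][a-z]+[-_]?\d{2,4}", acct.username):
            score += 0.3

        return min(score, 1.0)

None of these signals is conclusive on its own, which is exactly the point the sibling comment makes about noise, but researchers could have studied accounts that score highly instead of adding more of this content themselves.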


That would be a very difficult study to design. How do you know with 100% certainty that any given post is AI-generated? If the account is tagged as a bot, then you aren’t measuring the effect of manipulation from comments presented as real. If you are trying to detect whether they are AI-generated, then any noise in your heuristic or model for detecting AI-generated comments is then baked into your results.


The study as conducted suffers from those weaknesses too. The authors didn’t make any meaningful attempt to determine whether their marks were human or bots.

Given the prevalence of bots on Reddit, this seriously undermines the study’s findings.


> At least part of the ethics problem here is that it'd be plausible to conduct this research without creating any new posts.

This is a good point. Arguably, though, if we want people to take the next Cambridge Analytica or similar seriously from the very beginning, we need an arsenal of academic studies with results that are clearly applicable and very hard to ignore or dispute. So I can see the appeal of producing a paper abstract that says, specifically, "X% of people shift their opinions with minor exposure to targeted psyops LLMs".


Intentionally manipulating opinions is also obviously wrong and has no scientific merit either. You don't need a study to know that an LLM can successfully manipulate people. And for "understanding the effects" it doesn't matter whether they spam AI-generated content or analyse existing comments written by other users.


It’s the same logic. You’ve just decided that you accept it in some factual circumstances and not others. If you bothered to reflect on that, and had any intellectual humility, you might pause at that idea.



