It’s gross, but I am 10000% sure Reddit and the rest of social media is already overflowing with these types of bots. I feel like this project actually does people a service by showing what this looks like and how effective it can be.
I'm pretty sure we saw LLMs in yesterday's thread about the judge. There were a lot of strange comments (stately wording and weird logic that were very LLM-like, not just dumb-person-like), and it wouldn't be surprising, as it's an easy tool for weaponizing chaos. I'm sure there were bots supporting many different positions. It even looks like some accounts were posting contradictory opinions.
If I wanted to just attack and destabilize society I'd have armies of bots supporting all the most divisive positions on all sides, as well as promoting incoherent and irrational positions.
The idea is just to divide, confuse, "flood the zone with shit" as Bannon likes to say.
It seems like people are actually not bad at noticing likely bots arguing against their favorite positions, but are blind to the possibility that there could be bots pretending to be on their side. The most corrosive might be bots pretending to be on your side but advocating subtly wrong or unnecessarily divisive formulations of your ideas, which in turn are more likely to influence you because they seem to be on your side.
Phrases come to mind like "vandalism of the discourse" and "intellectual terrorism" where the goal is not to promote one specific idea but to destroy the discourse as a whole.
That certainly looks like the world we're living in.
I remember seeing some reports around the BLM protests that claimed Russia organized both a protest and a counter-protest via Facebook groups. Not sure how accurate that is (I believe it), but it certainly is an effective strategy: the old "divide and conquer" playbook that's thousands of years old.
The buried greens, yeah. I was guessing it was some meatpuppets I was arguing with after looking at account age and post history, though. Still considered it worth the effort and trollfeeding to set the record straight for others. And yeah, it's getting harder to tell the difference between puppets, zombies, and bots.
In the back of my mind I knew it wasn't so, but I had been holding onto the belief that surely I could discern between human and bot, and that bots weren't a real issue where I spent my time anyway. But no. We're at a point where any anonymous public comment is possibly an impersonation. And eventually that "possibly" will have to be replaced with "most likely".
I don't know what the solution is or if there even is one.
There isn't. Not only are LLMs good enough to fool humans like this, they have been for quite a while now with the right prompting. A large number of readily available open-weights models can do this, so even if large providers were to crack down on this kind of use, it's still easy to run a model locally to generate such content. The cat is well and truly out of the bag.
Agreed, this is already happening en masse. If anything, this is great to raise awareness and show what can happen.
The mods seem overly pedantic, but I guess that is usually the case on Reddit. If they think for a second that a bunch of their content isn’t AI generated, they are deeply mistaken
While I don't generally agree with the ethics of how the research was done, I do, personally, think the research and the data could be enlightening. Reddit, X, Facebook, and other platforms might be overflowing with bots that are already doing this, but we (the general public) don't generally have clear data on how much this is happening, how effective it is, things to watch out for, etc. It's definitely an arms race, but I do think that a paper which clearly communicates "in our study these specific things were the most effective way to change people's opinions with bots" serves as valuable input for knowing what to look out for.
But what does the study show? There was no control for anything. None of the data is valid. To clarify: how does the research team know the bots were interacting with people and not other bots?
I agree that the study was performed unethically and should not be published but observational studies are totally fine things to do.
A lot of research is "hey we looked at stuff and found this data that wiggles its eyebrows at some idea so we should fund more rigorous study design in the future." An individual paper does not need to fully resolve a question.
The reason not to publish this work is because the data was collected unethically and we don't want to reward or incentivize such work. Nothing to do with the quality of the data itself.
If the mere possibility of AI-generated content invalidates an argument, it suggests the standards for discourse were already more fragile than anyone cared to admit.
Historically, emotional narratives and unverifiable personal stories have always been persuasive tools — whether human-authored or not.
The actual problem isn't that AI can produce them; it's that we (humans) have always been susceptible to them without verifying the core ideas.
In that sense, exposing how easily constructed narratives sway public discussion is not unethical — it's a necessary and overdue audit of the real vulnerabilities in our conversations.
Blaming the tool only avoids the harder truth: we were never debating cleanly to begin with.
> Blaming the tool only avoids the harder truth: we were never debating cleanly to begin with.
Yes, the problem is we humans are susceptible, but that doesn't mean a tool used to scale up the ability to create this harm is not problematic. There's a huge difference between a single person manipulating one other person and a single person manipulating millions. Scale matters and we, especially as the builders of such tools, should be cautious about how our creations can be abused. It's easy to look away, but this is why ethics is so important in engineering.
In this discourse it is often forgotten that we have consumer protection laws for a reason. And that consumer protection has been a pillar of labor struggle for a long time (and consequently undermined by conservative policies).
Scarily effective ad campaigns which target cognitive biases in order to persuade consumers to behave against their own interest are usually banned by consumer laws in most countries. Using LLMs to affect consumer (or worse, election) behavior is no different and ought to be equally banned under consumer protection laws.
The existing tools at any given time do very much shape which consumer protection laws are created, and how they are created, as they should. A good policy maker does indeed blame a tool for bad behavior, and does create legislation to limit how that tool is used, or otherwise limit its availability on the open market.
It is also forgotten that we as engineers are accountable too. Mistakes will happen, and no one is expecting perfection, but effort must be made. Even if we create legal frameworks, individual accountability is critical to maintaining social protection, and with individual accountability we provide protection against novel harms. Legal frameworks are reactive, whereas personal accountability is preventative. A legal framework can't prevent things from happening (other than through disincentivization); it can only react to what has happened.
By "individual accountability" I do not mean jailing engineers, I mean you acting on your own ethical code. You hold yourself and your peers accountable. In general, this is the same way it is done in traditional engineering. The exception is the principle engineer, who has legal responsibility. But it is also highly stressed through engineering classes that "just following orders" is not an excuse. There can be "blood on your hands" (not literal) even if you are not the one who directly did the harm. You enabled it. The question is if you made attempts to prevent harm or not. Adversaries are clever, and will find means of abuse that you never thought of, but you need to try. And in the case here of LLMs, the potential harm has been well known and well discussed for decades.
What does that look like in practice, assuming an engineer doesn’t believe that the LLM genie can be put back into the toothpaste tube?
“Summarize the best arguments for and against the following proposition: <topic here />. Label the pro/for arguments with a <pro> tag and the con/against with a <con> tag” seems like it's going to be a valid prompt, and any system that can only give one side is bound to lose to a system that can give both sides. And any system that can give those answers can be pretty easily used to make arguments of varying truthfulness.
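To make that concrete, here's a minimal sketch of that prompt as an actual API call. The client library, model name, and topic are just illustrative assumptions; any OpenAI-compatible local or hosted endpoint would behave the same way.

    # Minimal sketch: the pro/con prompt described above sent to a chat-completion endpoint.
    # The model name and topic are placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY; also works against OpenAI-compatible local servers

    topic = "mandatory voting"  # hypothetical stand-in for <topic here />
    prompt = (
        f"Summarize the best arguments for and against the following proposition: {topic}. "
        "Label the pro/for arguments with a <pro> tag and the con/against with a <con> tag."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Nothing about that request is unusual or refusable, which is the point: you can't prompt-filter your way out of dual use.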
- You act on your morals. If you find something objectionable, then object. Vocally.
- You, yourself, get in the habit of trying to find issues with the things you build. This is an essential part of your job as an engineer.
- You can't make things better if you aren't looking for problems. The job of an engineer is to look for flaws and fix them.
- Encourage a culture where your cohort understands that someone saying "but what about" or "how would we handle" isn't saying "no" but "let's get ahead of this problem". That person is doing their job, they're not trying to be a killjoy. They're trying to make the product better.[0]
- If your coworkers are doing something unethical, say something.
- If your boss does something unethical, say something.
It doesn't matter what your job is, you should always be doing that. But for engineers, the potential for harm is greater.
But importantly, as engineers IT IS YOUR JOB. It is your job to find issues and solve them. You have to think about how people will abuse the tools you build. You have to think about how your things will fail. You have to think about what sucks. And most importantly, your job is to then resolve those things. That's what an engineer does. Don't dismiss it because "there's no value". The job of an engineer isn't to determine monetary value; that's the business people's job (obviously you do to some extent, but it isn't the primary focus). I'm really just asking that people do their jobs and not throw their hands up in the air, pass on blame, or kick the can down the road.
[0] I can't express how many times I've seen these people shut down and then passed over for promotion. It creates yes men. But yes men are bad for the business too! You and your actions matter: https://talyarkoni.org/blog/2018/10/02/no-its-not-the-incent...
> I mean you acting on your own ethical code. You hold yourself and your peers accountable
Industry-wide self-regulation is a poor substitute for actual regulation, especially in this capitalistic environment which rewards profitable behavior regardless of morality or ethics. In this environment the best an engineer can do is resign in protest (and I applaud any engineer who does that, in fact), however that won't stop the company from hiring new engineers who value their salary more than their ethical behavior, or who have different ethical standards.
> And in the case here of LLMs, the potential harm has been well known and well discussed for decades.
The harms posed by LLMs are the very same as those caused by any company in the pursuit of profits without regulation. In the past the only proven method of forcing companies to behave ethically has been industry-wide regulation, especially consumer protection regulation.
Substitute? No. But you need both. Sorry, it's not the incentives, it is you[0]. As I said before, regulation is reactive. This is why you need both. No one is saying no regulation, but I (and [0]) am saying things only happen because people do them. I know this is a wild claim, but it is an indisputable fact.
> The harms posed by LLMs are the very same
I expect every programmer and HN user to be familiar with scale. Please stop making this argument. You might as well be saying a nuclear bomb is the same as a Pop-Its. The dangers that LLMs pose are still unknown, but if we're unwilling to acknowledge that there's any unique harm then there's zero chance of solving them.
I'm sorry, but I'm not buying it. Other industries have had the technology for a long time to mass produce low quality products which harm their customers, or worse, bystanders. And in many of those cases the harmful production was never allowed on the market because the regulator was proactive and prevented the technology from being sold to consumers. I know those stories aren't as prominent as the story of leaded gasoline, because stories of harm spread much further than stories of prevented harm. But they do exist, and are numerous.
I also fail to see why we need both regulation and moral behavior from developers. If the regulation exists, and the regulator is willing to enforce it, any company which goes against the regulation will be breaking the law, and will be stopped by the regulator. We only need the regulation in this case.
> any company which goes against the regulation will be breaking the law, and stopped by the regulator
And how has that been working so far?
> why we need both
What's your argument here? What is the cost of having both? You sticking your neck out and saying, when something's wrong, that it's wrong? You having to uphold your moral convictions?
If regulation works perfectly, you'll never have to do anything, right? But if it doesn't, then you provide a line of defense. So I don't see your argument. You can't expect anything to work 100%, so what then?
Let's be honest here: if you believe in things but don't stand up for them when it's not easy to, then you really don't believe in those things. I believe in you, I just hope you can believe in yourself.
It won’t matter what I do personally if the company can just hire new engineers (or even outsource the work[1]). Let me repeat what I said above:
> however that won’t stop the company from hiring new engineers who value their salary more than their ethical behavior—or have different ethical standards.
Just because the state of consumer protection is abysmal in our current state of capitalism, that doesn't mean it has to stay that way, and just because the regulators are unwilling to enforce the few remaining consumer protection laws, it doesn't mean they never will. Before Reagan, consumer protection laws were passed all the time, and they used to be enforced; they can be again.
Yes, it doesn't matter if you're the only one that does it, but it does matter if you're not the only one that does. Frankly, many people won't even apply to jobs they find unethical. So yes, they can "hire somebody else" but it becomes expensive for them. Don't act like this (or most things) is a binary outcome. Don't let perfection get in the way of doing better.
> that doesn’t mean it has to stay that way
And how the fuck do you expect things to change if you will not make change yourself? You just expect everyone to do it for you? Hand you a better life on a golden platter? I'm sorry to tell you, it ain't free. You need to put in work. Just like with everything else in life. And you shouldn't put all your eggs in one basket.
Remember, I'm not arguing against regulation. So it is useless to talk about how regulation can solve problems. We agree on that, there's no discussion there. It seems the only aspect we disagree on is whether regulation works 100% of the time or not. Considering the existence of lawsuits, I know we both know that's not true. I know we both know time exists as well and the laws don't account for everything, requiring them to be reactive. Remember, laws can only be made after harm has been done. You need to show a victim. So how do we provide another layer of protection? It comes down to you.
You will not be able to convince me we don't need both unless: 1) you can show regulation works 100% of the time or 2) you can provide another safety net (note you are likely to be able to get me to agree to another safety net but it's probably going to be difficult to convince me that this should be a replacement and not an addition. All our eggs in one basket, right?). Stop doing gymnastics, and get some balls.
> Remember, laws can only be made after harm has been done.
This simply isn't true. Plenty of regulation is done proactively. You just don't hear about it as often because harm prevented is not as good of a story as harm stopped.
For example, we have no stories of exporting encryption algorithms to different countries causing harm, yet it is heavily regulated under the belief that it will cause harm to national security. Similarly, there are no stories of swearing on the radio causing harm, yet foul language is regulated by the FCC. More meaningful examples are the regulatory framework in the field of medicine and, if you want scale, the intellectual property of fashion design.
But even so, it can be argued that LLMs are already causing harm, it is mass producing and distributing bad information and stolen art. Consumers are harmed by the bad information, and artists are harmed by their art being stolen. A regulation—even if only reactionary—is still apt at this point.
The series of lawsuits you mention only proves my point. We expect companies that break the law to be punished for their actions, although I would argue that the regulator is generally far too lazy in pursuing legal action against companies that break the law.
I'll concede. You're right. But this also is not the norm, despite my best wishes that it was.
> I think our disagreement stems from this belief:
But I still think there's a critical element you are ignoring and I'm trying to stress over and over and over. YOU NEED TO ADDRESS THIS FOR A CONVERSATION TO BE HAD
>> if regulation works 100% of the time or not
>>>> If regulation works perfectly, you'll never have to do anything, right? But if it doesn't, then you provide a line of defense. So I don't see your argument. You can't expect anything to work 100%, so what then?
This concept is littered all throughout every single one of my comments and you have blatantly ignored it. I'm sorry, if you cannot even acknowledge the very foundation of my concern, I don't know how you can expect me to believe you are acting in good faith. This is at the root of my agitation.
> The series of law-suits you mention only proves my point. We expect companies that break the law to be punished for their action
No it doesn't, because you are ignoring my point. I am not arguing against regulation. I am not arguing that regulation doesn't provide incentives.
My claim that lawsuits exist was evidence for the claim:
Regulations are not enough to stop the behavior before it occurs.
Again, this is the point you are ignoring and why no conversation is actually taking place.
> although I would argue that the regulator is generally far too lazy in pursuing legal actions against companies that break the law.
Great! So you agree that regulation isn't enough and that regulation fails. You've tried very hard to avoid taking the next step. "WHAT DO YOU DO WHEN REGULATION FAILS?" Seriously, are you even reading my comments? At this point I can't figure out if I'm talking to a wall or an LLM. But either way, no conversation will continue unless you are willing to address this. You need to stop and ask yourself "what is godelski trying to communicate" and "why is godelski constantly insisting I am misunderstanding their argument?" So far your interpretations have not resolved the issue, so maybe try something different.
I am speaking around it because it seems obvious. If we have good regulation and enforcement of those regulations, there is no need for self-regulation. While we don't have good regulation, or while the regulator is unwilling to enforce existing regulation, the go-to action is not to amass self-regulation (because it will not work) but to demand better regulation, and to demand the regulator does their job. That is at least how you would expect things to work in a democracy.
Flooding human forums with AI steals real estate from actual humans.
Reddit is already flooded with bots. That was already a problem.
The actual problem is people thinking that because a system used by many isn't perfect that gives them permission to destroy the existing system. Don't like Reddit? Just don't go to Reddit. Go to fanclubs.org or something.
Ok, I need you to clarify a few implicit and explicit statements there: The study destroyed the subreddit? The authors of the study believed they had permission to destroy the subreddit? The subreddit is now destroyed? The researchers don't like Reddit? The researchers would achieve their aims by going to fanclubs.org or something?