
How do these databases differentiate between AI-generated CSAM and CSAM of real victims? (Since many jurisdictions only criminalize real CP.)

I know that 99% of people cannot tell an AI image from a real photo: that "Last giant Irish greyhound, 1902" photo has been going around on social media for weeks, and to me it is unbelievably obvious AI.



I assume the answer to that will be that there is no need to differentiate between them. And honestly, I agree with that argument. Possession of CSAM should be illegal regardless of whether it's "real" or not.

But the proposed scanning system is the wrong solution, regardless of any "real or AI" ambiguity, because it's possible to generate false positives with nonsense images that aren't even close to the expected CSAM, real or otherwise.
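
For what it's worth, here is a rough sketch of the kind of perceptual-hash matching these client-side scanners are generally described as using. The library (Pillow), the dHash variant, and the match threshold are my own assumptions for illustration, not details from any actual proposal; the point is how little information the hash retains, which is why collisions from visually unrelated images are possible.

    from PIL import Image

    def dhash(path, size=8):
        # Difference hash: downscale to grayscale, then record whether each
        # pixel is brighter than its right-hand neighbour (size*size bits).
        img = Image.open(path).convert("L").resize((size + 1, size), Image.LANCZOS)
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | int(left > right)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # A "match" is any hash within a small Hamming distance of a database entry.
    # The hash keeps only 64 bits of a multi-megapixel image, so inputs that look
    # nothing like the target can still land inside that radius.
    MATCH_THRESHOLD = 10  # assumed value, purely illustrative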


> I assume the answer to that will be that there is no need to differentiate between them. And honestly, I agree with that

I disagree. The point is to reduce actual child abuse. The images are, in a way, only tangential. If an image is made by an AI with no actual child being abused, then it shouldn't be a crime.

In a way, it's better, because it will distract the crowd of people into this sort of stuff from activities that harm real people.


> because it will distract the crowd of people into this sort of stuff from activities that harm real people.

This is actually the main point in dispute. Almost everyone arguing about this topic seems to simply assume one side of it and argue from there, rather than seeking to support their position on the fundamental disputed question of fact.

Which results in most of the debate being people talking past each other based on conflicting assumptions of fact.


Exactly. It may distract a crowd of people into something less harmful. Or it may perpetuate the behaviour, rather like how the ubiquity of cigarettes leads to more people craving nicotine.


I think it's worth asking: Does synthesized CSAM have an "advertising" effect for real CSAM and CSA?


This feels a bit like "cold reading": I think you're absolutely right, but for all I know you could have intended to post that comment on half the other threads on the front page.


It's certainly a common enough phenomenon; what makes it specific to this topic is the particular factual disagreement that people simply assume a side on.


>> I assume the answer to that will be that there is no need to differentiate between them. And honestly, I agree with that

>I disagree. The point is to reduce actual child abuse.

Resources are limited, and practically the only way to do this is to make it illegal to possess anything that looks real (or looks derived from a real situation, in an "I'll know it when I see it" way). Otherwise you're just handing out an almost unbeatable defence of "it is fake" or "I thought it was fake", and then you can't practically reduce actual child abuse.


You're assuming the truth of your conclusion without testing it. It's equally possible that encountering AI-CSAM is going to incentivize collectors to pay a premium for 'the real stuff', just as many CSAM collectors end up getting caught when they try to make the leap into engaging in abusive activities for real. Your mental model of how CSAM enthusiasts think isn't anchored in reality.


I don't think that's how it works. It's not that CSAM drives them to actual abuse, but that for some people CSAM isn't enough to satisfy their desires, so they go on to real abuse.

Thus I see no reason they would differentiate. With normal adult porn do we care that makeup and such might be involved?


I'm far less sure about this. We don't understand the neurodynamics of sexual desire that well, and a lot of research on sex offenders shows a pattern of escalation, similar to some kinds of drug addiction.

I don't understand your comparison with adult porn; that might be nonconsensual but typically isn't, whereas CP is nonconsensual by definition because minors aren't legally capable of consenting. Obviously there are grey areas like two 17-year-olds sexting each other, but most courts take that context into account.


That's how I see it, also. There is a very clear pattern that prevalence of pornography reduces rape. Why in the world should we expect a different result when we narrow the context?

Yucky as it is, I believe the answer here is to have image-generating AIs sign their work. Something properly signed is known not to involve any actual children and would thus be legal.

(My philosophy in general is that for something to be illegal the state should be able to show a non-consenting victim or the undue risk of a victim (ie, DUI). I do not believe disgusting things in private warrant a law.)
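
To make the signing idea concrete, here's a minimal sketch of what "the generator signs its output" could look like, using the Python cryptography package and a hypothetical generator key pair; both are my assumptions, not part of the parent's proposal. A real scheme would also need to handle key distribution and survive re-encoding, which this ignores.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The generator service would hold the private key; verifiers get the public key.
    generator_key = Ed25519PrivateKey.generate()
    public_key = generator_key.public_key()

    def sign_output(image_bytes: bytes) -> bytes:
        # Sign the exact bytes the model produced.
        return generator_key.sign(image_bytes)

    def is_generator_output(image_bytes: bytes, signature: bytes) -> bool:
        # Anyone holding the public key can check the provenance claim.
        try:
            public_key.verify(signature, image_bytes)
            return True
        except InvalidSignature:
            return False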


> In a way, it's better, because it will distract the crowd of people into this sort of stuff from activities that harm real people.

I see that assertion a lot. If that's how that works, why does the very large amount of CSAM already in existence not have the same effect? Why would synthesized CSAM distract pedophiles from their activities when real CSAM from their fellow pedophiles doesn't?


What are pedophiles' activities? Did you mean abusers?

I don't think they meant it would be 100% effective. And real child pornography may deter future abuse. Research is inconclusive.


You may think that intuitively, but studies actually indicate the opposite: use of CSAM leads to an increased risk of contacting and abusing children.

This needs to be balanced against rights to privacy and expression, which I personally think take precedence, but pretending that it can serve as harm reduction is just not correct.


I'm personally highly skeptical of the "offering them an outlet" argument. I'd be less suspicious of the idea if its proponents also suggested limiting it to controlled settings, e.g. during meetings with a professional psychiatrist.

But I'm sorry, I just don't believe anyone holed up in their room with a bunch of fake CSAM is "just using it as an outlet" or "protecting real kids from harm." I mean, it almost sounds like a threat: "If you don't let me look at these pictures of fake kids, I'll hurt real kids." If that's the case then they should be seeing a psychiatrist, at minimum.


Pornography reduces rape.

Violent movies that appeal to teens reduce vandalism and the like: the teens are in the theater rather than out causing trouble. (And the trouble isn't merely displaced; rates don't spike later, they just return to normal.)


Citation needed.


There are so few actual studies of this, and AI images are only a year or so old, that I would not put any weight behind them at this point.

My point isn't that AI CSAM should be legal or not, but whether these tools can differentiate what the lawmakers have decided is a crime or not.


Of course it's related. That doesn't mean it "leads to" anything; I think that's putting the cart before the horse.

Those who have no sexual interest in children are neither going to have CSAM nor engage in abuse. The fact that they already had CSAM shows it's a highly non-random sample. The proper control would be pedophiles with no access to CSAM, but how do you find that control group?


Studies such as?


> Possession of CSAM should be illegal regardless of whether it's "real" or not.

From a purely ethical standpoint: why? What is the purpose of punishing someone who has harmed no one?

No victim means no crime.


From a purely ethical standpoint, sure, I agree. But we live in reality, and there are plenty of activities that seem ethically victimless, but are practically necessary to criminalize, in order to uphold societal frameworks and expectations of morality.

In this case, by giving every CSAM criminal a potential excuse that they "thought it was AI generated," the real victims are further victimized by being deprived of justice or forced to prove their realness.


Broadening the definition of a crime to make it easier to punish the ethically guilty on scant evidence, while incidentally sweeping up the ethically innocent, is a hack around a legal tradition designed exactly on the principle that it is better for the guilty to go unpunished than for the innocent to be punished. It works by making the genuinely innocent administratively guilty, and we ought to reject that kind of justification every time it rears its head.

(There are times when it is important to have commonality while the choice of the common practice isn't important, which justifies regulations of obviously ethically unimportant things like "which side of the road is it correct to drive on relative to the direction of travel"; but where the purpose of a crime is purely to lower the evidentiary bar to punish people presumed guilty of a narrower crime, that's just an attempt to hack around the presumption of innocence and the burden of proof of guilt.)


Broadening the definition of a crime isn't exactly unheard of.

To choose a less emotional subject, mattress tags.

The ethical reason for mattress tags is that historically people would sell mattresses stuffed full of all sorts of unsavory garbage. What we actually criminalized, or at least were trying to prevent, was some sort of fraud or public endangerment.

But we also along the way made it illegal for sellers to remove the tags from mattresses.

Removing the tag isn't inherently harmful; if you don't deceive the purchaser on the contents of the mattress, it's not even fraud.

But we broadened the definition of the crime to make it easier to enforce.


You are getting into mandatory labelling disclosure, which can have a lot of benefits beyond fraud prevention because it increases informed consent and brings all sorts of other net goods. It's the logic and ethics of nutrition labels, and IMO it's one of the more clearly net-positive things that governments can mandate.


I agree with you. I think our disagreement here is over the level of innocence of someone possessing AI generated CSAM.

If you believe, as I do, that such a person is guilty of a crime, then we're not risking wrongly convicting an innocent person; at most, we're risking getting their level of sentencing wrong. And I'm open to the idea of reduced sentences for AI CSAM, but it shouldn't be a factor in the determination of guilt (i.e. it should be a matter between the judge and the defendant, rather than something the prosecution needs to prove).

With regards to CSAM criminalization in general, there is a real risk of punishing innocent people that may have been framed by planted evidence. But this is a risk regardless of whether the evidence is "real" CSAM or not, so legalizing possession of AI-generated CSAM doesn't reduce the risk of an innocent person being framed. It might make it "easier" for a bad actor to frame someone, since they can now do it with AI content instead of real content. But if they're already planting evidence, do they really care whether they're committing a crime while preparing the evidence? And besides, if we keep the AI content illegal, then it's equally legally risky to frame someone with it as it is to frame them with real content.

The problem of prosecuting "innocent" people, whether you believe they're innocent because they were framed or because they're only guilty of possessing AI-generated CSAM, should be addressed at the time of enforcement. Stop using entrapment and fishing expeditions as an enforcement mechanism. Only open investigations when they start with a real and identifiable victim, rather than a potential perpetrator.


> I think our disagreement here is over the level of innocence of someone possessing AI generated CSAM.

> If you believe, as I do, that such a person is guilty of a crime,

You just explicitly said upthread that ethically they are not, but argued that it is useful for them to be treated as criminals because it denies an excuse to those who are ethically guilty because they are possessors of genuine CSAM.

You seem to be moving your fundamental ethical premises around in response to it being pointed out that the argument you previously made conflicts with a different widely proclaimed ethical premise.


My ethical premise is that there is no direct victim of AI-generated CSAM, but that it's worth criminalizing because otherwise it further victimizes the real victims that existing law protects. In other words, there is a societal victim. To me it's the same ethical premise interpreted within two different frameworks: one that's purely idealistic, and one that's grounded in practical reality.


AI cannot generate CSAM, because AI cannot abuse children. AI makes fictional images, which definitionally cannot be images of child sexual abuse.

There is literally no victim of any kind, even conceptually, in the case of computer generated imagery. It should be protected artistic expression.


What if a police officer generates some AI CSAM and then sells it to someone who thinks it's real? There's still "no victim," but the buyer thinks that there was. Are they guilty of a crime?

Your logic would seem to imply that there's no crime with possession of real CSAM either, and that the only crime lies with the original abuser who took the pictures.


> Are they guilty of a crime?

Unless there is a very specific "attempt to acquire CSAM" law, then no, they're not fucking guilty of any crime. If you live in a state where marijuana is illegal and you smoke some oregano because you thought it was marijuana, you're not guilty of actually possessing marijuana.

A criminal code is composed of a number of individual statutes. When a state is trying to prosecute someone for a crime, it needs to prove three elements for the relevant statute: the criminal act (actus reus), intent (mens rea), and the concurrence of the two.

If a cop sells you oregano and you think it's marijuana you might have the intent to buy marijuana but there's no actual criminal act because oregano isn't illegal. If you make a law that only requires intent then congratulations, you've created thought crimes.

If you want to make entirely fake CSAM possession illegal, that's essentially the same as an intent-only law and creates thought crimes. It's a slippery slope.


> If a cop sells you oregano and you think it's marijuana [...]

It wasn't a cop, but I recall a case some years back when someone sold something as cocaine when it wasn't. Among other things, he went down for fraud.


> If a cop sells you oregano and you think it's marijuana you might have the intent to buy marijuana but there's no actual criminal act because oregano isn't illegal. If you make a law that only requires intent then congratulations, you've created thought crimes.

You're a lawyer, I take it? I'm not a lawyer, and I admit your analysis of this scenario confuses me. Is there no legal difference between merely having intent to commit a crime at some point in the future, and actually attempting to commit a crime?


I'm definitely not a lawyer.

> Is there no legal difference between merely having intent to commit a crime at some point in the future, and actually attempting to commit a crime?

That was my point. To be charged with and prosecuted for a crime, you need to both intend to commit it and then actually commit (or attempt to commit) it. Attempted murder is a crime: I both intend to kill someone and try to do so, even if I fail. It's not punished as severely as actual murder, but it's still a crime. And attempted murder is actually a specific crime in the criminal code, with elements that need to be proven in court.

Unless a jurisdiction has a crime of "attempted possession of marijuana", intending to buy marijuana but ending up with oregano isn't a crime someone can be charged with. If we start writing laws outlawing attempted possession, it's a slippery slope that gets into outlawing thoughts. It also opens the door to stupid pre-crime ideas, like "someone would only use cryptography to get hold of illegal content, therefore anyone using cryptography is instantly guilty of attempting to obtain illegal material."

You can be sure this is what will happen because it's the very arguments the anti-cryptography groups use.


Attempted possession of an illegal item/substance is absolutely a crime.

Source: decade in the criminal justice system.


Definitely not a lawyer. Under the parent commenter's reasoning, you couldn't charge anyone with criminal conspiracy.


While in one sense you could say possession shouldn't be illegal, the problem is that for someone to possess it, someone must have created it. If there's a market for it, some people will engage in abuse to satisfy that market. Thus, possession of real CSAM has an indirect victim.

I had previously proposed that if the abuser has been caught, the victim should get the rights to the images and, once they are an adult, be allowed to legally sell them (thus a list of legally permitted images), but the AI image revolution has changed that. Have AIs sign their images; CSAM with a proper signature would be legal.


There are plenty of "victimless" crimes that society deems unsavory and punishes. E.g., smoking pot alone in your home is a crime in a lot of jurisdictions, even though clearly no one is harmed.

In a lot of jurisdictions the decision has been made, whether it is right or wrong, to criminalize AI CSAM. The people have spoken and the lawmakers have made the laws. If you or I think that is wrong, then the option is to lobby for a change.


> There are plenty of "victimless" crimes that society deems unsavory and punishes.

That is not an argument against the position that it should not be the case.


Plenty of actions are crimes without real victims. Not having insurance while driving is an example. Possession of explosives is another one.


In fact, you might even argue that possessing real CSAM has no victim. After all, the person possessing the image isn't the one who committed the abuse and took a picture of it, right? But we've collectively decided that it's worth punishing that crime, because every viewer is an enabler of the abuser. The same logic should extend to AI-generated content.

To put it another way, consider a thought experiment where a police officer generates CSAM with AI and then sells it to someone who thinks it's a real picture of a real victim. We should arrest the buyer, right? They thought they were committing a crime.


You have literally proposed 'thought crimes'.


There's plenty of precedent where police officers pretend to be an underage person and some shmuck replies to them and agrees to meet at a hotel. Then they get arrested, and much of the time they also get prosecuted and convicted. You could argue it's entrapment but the fact is that most of society supports that sort of preemptive law enforcement.

If you looked at it through a purely ethical framework then you could never convict the person because there was never any "real victim." But is that the right way to look at it? It's certainly not the way most people look at it.


Seems like you want to bring all the success of the war on drugs to the war on generated images.


No, I don't support any automated scanning system or really any sort of "going out of our way" to find new criminals.

The reason I think it's a bad idea to differentiate between real or AI is similar to the arguments against "means testing" for distributing benefits. You don't want to put real victims in a situation where they're deprived of justice because they can't prove that their victimization was "real." Imagine a real CSAM criminal claiming a defense that they "thought it was AI generated." Do you want to give them that out?

If protecting those victims comes at a cost of punishing criminals possessing AI-generated CSAM with sentences equally as harsh as those for "real" CSAM, then it's a worthwhile cost to pay. They are still criminals, and they are definitely not innocent (unless they're being framed, but that's a risk with both real and AI images).


It's already a strict liability offense in many places, meaning that you don't even have to know that you possessed the image at all. You could apply the same strict liability standard and say that it doesn't matter that you didn't think it was real as long as it actually was real.

"I thought she was 18" doesn't work for physical sex either.


This is the only reasonable comment in the entire thread. Make CP possession a matter of strict liability and all problems are solved.


>> Imagine a real CSAM criminal claiming a defense that they "thought it was AI generated."

Saying "I thought this heroin was fake" is not a defense when caught with a bag of heroin, I don't see how this would be any different. It's not a magic out for anyone.


That isn’t the argument at all.

It's that you have a bag of fake heroin, and you are then arrested for it because someone thinks that by having fake heroin you are encouraging real heroin users to do more real heroin.


Well yes, which is why my argument (sorry if it wasn't clear) is that having fake heroin shouldn't be illegal.


Sure it is. If the bag of "heroin" on the movie set turns out to be real, do you think the actors are going to be convicted of possession?


I'm not sure what point you are making here. An actor who was given a fake bag of heroin as a prop which then turns out to be real is no more guilty than a courier moving a package that happens to contain drugs or guns or fake money or anything else; neither would be found guilty of possession.

These are exceptional circumstances, and no prosecutor in the world would choose to prosecute them; but there is a 0% chance you could get away with saying "oh, I thought it was fake" if caught with CP on your phone.


I was simply providing an example of where "I thought it was fake" would be a reasonable defense.


I'm on the fence about whether AI CSAM should be illegal or not. There is no scientific consensus either way on whether it increases or decreases a person's thoughts about actual physical abuse.

The issue is that the people (and the legislators) in each jurisdiction have made their own choice about whether AI CSAM is illegal, and where it isn't, this runs the risk of falsely accusing someone of a crime.

[if you disagree that AI CSAM should be legal in your jurisdiction the solution isn't to arrest everyone, but to petition your lawmakers to change the law]


> I assume the answer to that will be that there is no need to differentiate between them. And honestly, I agree with that argument.

Why do you believe that?


See my comment to a sibling reply. Basically I don't want to make victims prove their victimization was real, and I don't want to give criminals with real victims an opportunity to argue they "thought it was AI generated."


And I absolutely want both parties to have to prove guilt or innocence and to have an opportunity to argue. The current situation, where anything involving CSAM is so toxic that lives are ruined without trial, is not healthy and isn't good for anybody.


Point of order: victims are not "parties" in criminal cases, not in remotely modern legal systems. The parties are the accused and the state.

For the same reason, victims don't get to pardon crimes committed against them.

... which is the way it should be, because criminal punishment should not be seen as a form of revenge, but as a deterrent.


Right. Most crimes are essentially ones that "the people" found unsavory.

For instance, there is no "victim" if I get caught enjoying cannabis in my own home in a jurisdiction where such a thing is illegal, but "the people" have made a decision that they don't like it and that I should be punished for committing an anti-social act.

That is one of the fundamental aspects of democracy at work.


I am of the opinion that regardless of whether they are both illegal, the penalties for “AI” should be significantly less.


What's your reasoning?



