Traditional GANs typically have a different problem called mode collapse, in which the generator learns one (or a few) outputs that the discriminator likes. Conditional GANs take a condition as input and are more prone to overfitting, because the loss function is a combination of the adversarial term and a supervised loss based on the condition.
This is interesting because traditional GANs tend not to overfit.
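The combined loss described above can be sketched as follows. This is a minimal illustration, not any specific library's API: `d_out` (the discriminator's probability that the generated output is real) and `lambda_l1` (the weight on the supervised term, pix2pix-style) are assumed names, and the inputs are toy lists rather than real image tensors.

```python
import math

def generator_loss(d_out, generated, target, lambda_l1=100.0):
    # Adversarial term: push the discriminator's output toward 1 ("real").
    adv = -math.log(d_out)
    # Supervised term: mean L1 distance between the generated output and
    # the ground-truth target paired with the condition.
    l1 = sum(abs(g - t) for g, t in zip(generated, target)) / len(target)
    # The total loss combines both; the supervised term is what can drive
    # the generator to memorize (overfit) the conditioned targets.
    return adv + lambda_l1 * l1

# Toy example: a generator output close to the target, with the
# discriminator unsure (d_out = 0.5).
loss = generator_loss(d_out=0.5, generated=[0.2, 0.4], target=[0.25, 0.35])
```

Note that with `lambda_l1 = 0` this reduces to the ordinary (non-saturating) GAN generator loss, and with a large `lambda_l1` it behaves almost like plain supervised regression.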