They don't want you creating images that either mimic the works of other artists closely enough to confuse viewers (or courts), or mimic realistic photographs closely enough to let people generate low-effort fake news. So they impose an intentionally-crappy orange-cyan palette on everything the model generates.
Peak quality in terms of realistic color rendering was probably the initial release of DALL-E 3. Once they saw what was going to happen, they fixed that bug fast.
SDXL and FLUX models with LoRAs can and do vastly outperform the big monolithic models at tons of things those models can't or won't do now. Various subreddits and civitAI blogs describe ComfyUI workflows and details on how to maximize LoRA effectiveness, and they're probably all you need for a guided tour of that space.
This isn't my special interest, but the DIY space is much more interesting than the SaaS offerings. That holds for generative AI more generally: the DIY scene is going to be the more interesting one.
OpenAI's new image generation model is autoregressive, while DALL-E was diffusion. The yellowish tone is an artefact of their autoregressive pipeline, if I recall correctly.
Could be. My point is that if the pipeline itself didn't impart an unmistakable character to the generated images, OpenAI would feel compelled to make it do so on purpose.
Most DALL-E 3 images have an orange-blue cast, which is absolutely not an unintended artifact. You'd literally have to be blind to miss it, or at least color-blind. That wasn't true at first -- check the original paper, and try the same prompts! It was something they started doing not long after release, and it's hardly a stretch to imagine why.
They will be doing the same thing for the same reasons today, assuming it doesn't just happen as a side effect.
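If you want to sanity-check the cast claim yourself rather than eyeball it, here's a rough sketch of one way to quantify a warm/cool color bias: compare the mean red and blue channel levels of an image. The metric and the synthetic example are purely illustrative assumptions on my part, not anything from OpenAI or the DALL-E 3 paper.

```python
import numpy as np

def warm_cast_score(img: np.ndarray) -> float:
    """img: HxWx3 uint8 RGB array.
    Positive -> warm (orange) cast, negative -> cool (blue) cast, ~0 -> neutral.
    """
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel means over all pixels
    r, g, b = means
    return float((r - b) / 255.0)

# Synthetic example: a flat gray image, and the same image pushed toward orange.
gray = np.full((64, 64, 3), 128, dtype=np.uint8)
warm = gray.copy()
warm[..., 0] = 160  # boost red
warm[..., 2] = 96   # cut blue

print(warm_cast_score(gray))  # 0.0 (neutral)
print(warm_cast_score(warm))  # ~0.251 (warm cast)
```

Run this over a batch of generations from the same prompt, pre- and post-change, and a systematic shift in the score would show up immediately.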