
Even with Stable Diffusion, determinism is “best effort”: there are flags you can set in Torch to make it more deterministic at a performance cost, but it’s explicitly disclaimed:

https://pytorch.org/docs/stable/notes/randomness.html
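
For reference, the knobs that page describes look roughly like this (a minimal sketch; the env var is what cuBLAS requires for deterministic matmuls on CUDA 10.2+, and it has to be set before CUDA initializes):

    import os
    import torch

    # Required by cuBLAS for deterministic GEMMs; set before any CUDA work
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

    torch.manual_seed(42)                     # seed the RNG on CPU and all GPUs
    torch.use_deterministic_algorithms(True)  # raise an error on nondeterministic ops
    torch.backends.cudnn.benchmark = False    # disable cuDNN autotuning

Even then, the docs only promise reproducibility for the same software version, hardware, and driver, which is the “best effort” part.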




The base models of Stable Diffusion have always been deterministic if you use a deterministic noise scheduler...
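
A minimal sketch of that with Hugging Face diffusers (the checkpoint name is just an example; any SD model works): the seeded Generator fixes the initial latent noise, and DDIM with its default eta=0 adds no extra randomness per step.

    import torch
    from diffusers import StableDiffusionPipeline, DDIMScheduler

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1",  # example checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    # Swap in DDIM, a deterministic sampler at eta=0
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

    # Same seed + prompt + steps should give the same image (modulo FP/GPU issues)
    gen = torch.Generator(device="cuda").manual_seed(1234)
    image = pipe("a red fox in the snow", generator=gen,
                 num_inference_steps=30).images[0]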


I think they’re referring to CUDA (and possibly other similar runtimes) being able to execute floating point ops in a nondeterministic order, combined with floating point addition not being associative. I’m not personally sure how big an issue that would be for the output though.
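
A quick illustration of the non-associativity in float32; a parallel reduction that happens to combine terms in a different order can land on a different result:

    import numpy as np

    a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(1.0)
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0 -- the 1.0 is absorbed when added to -1e8 first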


I have never spotted any difference when regenerating a (recent) image with the same settings/seed/noise, and I do it often. I haven't compared the bits, though.

Older images are often difficult to reproduce for me, I believe due to changes in tooling (mostly updating Auto1111).
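
Comparing the bits is cheap, if anyone wants to try (filenames are hypothetical): hash the decoded pixels rather than the file, since PNG metadata such as the generation parameters Auto1111 embeds can differ between otherwise identical renders.

    import hashlib
    import numpy as np
    from PIL import Image

    def image_digest(path):
        # Hash raw RGB pixels; equal digests mean bit-identical images
        arr = np.asarray(Image.open(path).convert("RGB"))
        return hashlib.sha256(arr.tobytes()).hexdigest()

    print(image_digest("run1.png") == image_digest("run2.png"))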


Differences in output are generally varying levels of “spot the difference” difficulty and rarely change the overall image composition by much. I always use nondeterministic algos and it doesn’t have any effect on my ability to refine prompts effectively.
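
If you want to actually see those differences rather than hunt for them, an amplified diff image works (filenames hypothetical, amplification factor arbitrary):

    from PIL import Image, ImageChops

    a = Image.open("run1.png").convert("RGB")
    b = Image.open("run2.png").convert("RGB")
    # Scale up tiny per-pixel deltas so they become visible to the eye
    ImageChops.difference(a, b).point(lambda v: min(255, v * 16)).save("diff.png")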


Yeah, this is what I was referring to: the GPU/FP issue, which, btw, had been explained to me in the past here on HN...



