Hacker News

> LLMs, by their very nature are probabilistic

I believe that if you can tweak the temperature parameter (OpenAI recently turned it off in their API, I noticed), setting it to 0 should hypothetically produce the same output given the same input.
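To illustrate why temperature 0 implies repeatable output, here is a toy sketch of the usual softmax-with-temperature sampling (not any provider's actual implementation; the logits are made up for the example). At temperature 0 the sampler degenerates to argmax, so randomness drops out entirely:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample a token index from logits; temperature 0 falls back to argmax."""
    if temperature == 0:
        # Greedy decoding: always pick the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature scaling: divide logits before softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for a 3-token vocabulary.
logits = [2.0, 1.0, 0.5]
# At temperature 0, every call picks the same token regardless of the RNG.
greedy = [sample_token(logits, 0, random.Random(i)) for i in range(10)]
print(greedy)  # all identical: index 0
```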



That only works if you decide to stick to that exact model for the rest of your life, obviously.


The point is he said "by its nature". A transformer-based LLM, when called with the same inputs, seed, etc., is literally the textbook definition of a deterministic system.
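The seed point can be shown with a toy sampler (a sketch, not an actual LLM: `probs` here stands in for a model's next-token distribution). Even with temperature above 0, a fixed RNG seed makes the whole sampled sequence reproducible:

```python
import random

def sample_sequence(probs_fn, seed, length):
    """Sample a token sequence; the same seed always yields the same sequence."""
    rng = random.Random(seed)
    out = []
    for _ in range(length):
        weights = probs_fn(out)  # next-token distribution given context
        out.append(rng.choices(range(len(weights)), weights=weights, k=1)[0])
    return out

# Hypothetical fixed next-token distribution over a 4-token vocabulary.
probs = lambda ctx: [0.4, 0.3, 0.2, 0.1]
a = sample_sequence(probs, seed=42, length=8)
b = sample_sequence(probs, seed=42, length=8)
print(a == b)  # True: identical seed, identical output
```

In practice floating-point non-associativity across different GPUs or batch sizes can still perturb results, which is why "same inputs/seed/etc" carries weight in the claim above.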


No one uses temperature 0 because the results are terrible.



