There was an article here a while back arguing that LLMs are like gambling: sometimes you get great payouts, often you don't, and as Psych 101 taught us, that kind of intermittent reward is addictive.




Interesting point; I'd never thought of it like that, and I think there is some truth to it. On the other hand, IIRC, intermittent reinforcement works best when the outcome is pure chance (you have no control over the likelihood of reward) and the reward probability falls within some range (the optimum isn't 50%, I think, though I could be wrong).

I don't think either of these holds for LLMs. You obviously can improve the results with the right prompt, context, and model choice, to a pretty large degree. The probability is hard to quantify, so I won't try. Let's just say you wouldn't call yourself addicted to your car because there's a 1% chance it breaks down and strands you in the middle of nowhere and a 99% chance of the "reward" of an uneventful trip. Where the threshold lies, I'm not sure.
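
To make the contrast concrete, here's a toy Python sketch (mine, not from any article; the probabilities and names are made up for illustration) comparing a rare, slot-machine-like payoff schedule with a near-certain one like the car example:

    # Toy simulation of two Bernoulli reward schedules.
    # p=0.05 stands in for a slot-machine-like intermittent payoff;
    # p=0.99 stands in for the car example (reward = an uneventful trip).
    import random

    def rewarded_trials(p, trials=10_000):
        # Count how many independent trials pay out at probability p.
        return sum(random.random() < p for _ in range(trials))

    print("intermittent (p=0.05):", rewarded_trials(0.05))
    print("near-certain (p=0.99):", rewarded_trials(0.99))

Same mechanism in both cases, very different psychology: only the low, unpredictable payoff rate is the regime the reinforcement literature calls addictive.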



