
One trick I found is to tell the LLM that an LLM wrote the code, whether it did or not. The machine doesn't want to hurt your feelings, but it loves to tear apart code it thinks it might've written.
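A minimal sketch of what that framing can look like, assuming the OpenAI Python SDK; the model name and prompt wording are just placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def review(code: str) -> str:
        # Claiming the code is LLM-generated tends to invite a harsher critique
        # than asking the model to review "my" code.
        prompt = (
            "The following code was written by an LLM. "
            "Review it critically and list every flaw you find:\n\n" + code
        )
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content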


I like just responding with "are you sure?" continuously. At some point you'll find it gets stuck in a local minimum/maximum and starts oscillating. Then I backtrack and look at where it wound up before that, take that solution, and go to a fresh session. A sketch of the loop is below.
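A rough sketch of that loop, again assuming the OpenAI Python SDK; the round count and model name are arbitrary, and spotting where the answers start oscillating is still done by eye:

    from openai import OpenAI

    client = OpenAI()

    def pressure_test(question: str, rounds: int = 5) -> list[str]:
        # Ask once, then keep replying "are you sure?", saving every
        # intermediate answer so you can backtrack to the one just before
        # the model starts flip-flopping.
        messages = [{"role": "user", "content": question}]
        answers = []
        for _ in range(rounds):
            resp = client.chat.completions.create(model="gpt-4o", messages=messages)
            answer = resp.choices[0].message.content
            answers.append(answer)
            messages.append({"role": "assistant", "content": answer})
            messages.append({"role": "user", "content": "are you sure?"})
        return answers  # inspect by hand; stop where consecutive answers start repeating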


Isn’t this sort of what the reasoning models are doing?


Except they have no concept of what "right" is, whereas I do. Once it seems to have gotten itself stuck in left field, I go back a few iterations and see where it was.



