
I had a similar experience earlier. I described a problem that isn't even that hard - very similar to something there are probably lots of examples of online, but subtly different. I wanted to see if it handled these subtly different requirements.

It failed miserably, even with repeated instructions. It just assumed I wanted the more common problem. Every time I pointed out the issue it would say "sorry for the confusion, I've fixed it now" and give me back identical code. I even asked it to talk me through test cases: it identified that its own code didn't pass them, but then still gave me back identical code.

I eventually gave up.




I've found persistence is not a good strategy with GPT. Put effort into your prompt, maybe try clarifying once, and if it doesn't work, don't keep trying. It will get closer to the solution at a diminishing rate - just enough to tease you along, never getting there.


It has failed every meaningful programming challenge I've given it (to be fair, I only ask when I've got something difficult in front of me).

I do wonder if part of it is that my prompts are worse because I already have a partial solution in mind.



