I have not had good results and stopped trying. I have had some usable results, but on careful inspection there were subtle problems or needless convolutions that implied a different solution was being used than was actually the case. The sort of thing that works but is prone to misinterpretation by the next person working in the code.
Based on this I'm very against using it for things the user doesn't have significant knowledge of. Some coworkers seem to be having better success, but I definitely get the sense they are reading and editing the results carefully. I don't find it much of a productivity gain, if any, so I stopped trying for now.
> Some coworkers seem to be having better success but I definitely get the sense they are reading and editing the results carefully.
Yes, you need to consider the AI as if it were a junior programmer that sometimes makes mistakes. I use it for boring work that can be quickly checked. For example, the other day I asked for a 'give me next workday' algorithm based on the code structure I had, and it worked fine.
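For context, a "next workday" helper is the kind of small, quickly verifiable task being described. The original code structure isn't shown, so this is just a minimal standalone sketch of what such an algorithm might look like, assuming "workday" means Monday through Friday with no holiday calendar:

```python
from datetime import date, timedelta

def next_workday(d: date) -> date:
    """Return the first weekday (Mon-Fri) strictly after d.

    Assumes weekends are the only non-working days; a real version
    would likely also consult a holiday list.
    """
    nxt = d + timedelta(days=1)
    while nxt.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        nxt += timedelta(days=1)
    return nxt

# e.g. next_workday(date(2024, 3, 15))  # a Friday -> Monday 2024-03-18
```

Something this small is easy to eyeball and unit test, which is exactly why it's low-risk to delegate.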
If it's that straightforward I'd rather just write it. Like I said, it hasn't been an overall time saver given the extra scrutiny I need to put it through. I'll try again in six months.
Also, idk, kind of a tangent, but you brought it up: I don't feel like my junior devs make easily caught algorithmic mistakes like that. They're more likely to misjudge the scope of the problem, or not be aware of a technical consideration or a known solution. For that kind of work I'd rather... mentor a junior dev through it so they get the experience.