I tried GitHub Copilot, free mode, on #1. The Python code outsourced all the hard work to numpy and pymeshlab, which is fine. Copilot wrote code to generate the triangles, and it did a reasonable job. Copilot's knowledge of what to call is better than mine.
I still have to actually run it, but it did OK.
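For a sense of what that looks like, here's a minimal sketch in the same spirit: build a triangulated grid with numpy and hand it to pymeshlab. The grid layout and file name are my own illustration, not Copilot's actual output:

    import numpy as np
    import pymeshlab

    # Regular grid of vertices over the unit square (illustrative only).
    n = 32
    xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    verts = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(n * n)])

    # Two triangles per grid cell, indexed into the flattened vertex array.
    faces = []
    for i in range(n - 1):
        for j in range(n - 1):
            a = i * n + j
            b, c, d = a + 1, a + n, a + n + 1
            faces.append([a, b, c])
            faces.append([b, d, c])
    faces = np.array(faces, dtype=np.int32)

    # The hard part is outsourced: wrap the arrays in a Mesh and let pymeshlab save it.
    ms = pymeshlab.MeshSet()
    ms.add_mesh(pymeshlab.Mesh(vertex_matrix=verts, face_matrix=faces))
    ms.save_current_mesh("grid.ply")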
On #2, it set up the problem, but bailed on the gap filling part with
    # (This is a complex step; see note at bottom.)
That's not surprising, since I didn't tell it how to solve the problem. Can any of the premium systems do #2?
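For what it's worth, the unglamorous way to do the gap filling is to lean on pymeshlab's own hole-closing filter rather than writing the geometry yourself. A rough sketch, assuming a recent pymeshlab where the filter is named meshing_close_holes (filter names and parameters have shifted between releases, so check yours), with placeholder file names:

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh("input_with_gaps.ply")   # placeholder file name

    # Close holes up to a size limit; maxholesize is in edges, and the
    # right value depends on how big the gaps in the mesh actually are.
    ms.meshing_close_holes(maxholesize=30)

    ms.save_current_mesh("filled.ply")

Whether that counts as solving #2 depends on the gaps; it won't do anything intelligent about large or ragged ones.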
When I used it, Copilot was best at autocompleting small, routine procedures, like a few operations on a list. I used it to fill in the blanks of tedious stuff in what I was already coding myself. It worked well at that.
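To give a concrete idea of the scale, the blanks I let it fill were on the order of this (a toy example of my own, not real project code):

    # Typical Copilot-sized blank: group records by key and sum a field.
    def totals_by_key(records):
        totals = {}
        for key, value in records:
            totals[key] = totals.get(key, 0) + value
        return totals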
However, if the task took creativity or real analysis, I had to throw the big models at it with a carefully written prompt. You want to use their recent, best, big models. I also used to include precise descriptions of the data structures and function APIs, which helped a lot. From there, you tweak the description until you get pieces that produce the result you want.
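By "precise descriptions" I mean putting the actual data structures and signatures in the prompt, not prose paraphrases of them. Something on this level of detail, with hypothetical names just to show the shape:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Patch:
        """One surface patch: vertices is an (N, 3) float64 array,
        faces is an (M, 3) int32 array of indices into vertices."""
        vertices: np.ndarray
        faces: np.ndarray

    def stitch_patches(patches: list[Patch], tolerance: float) -> Patch:
        """Merge patches into one mesh, welding vertices closer than
        tolerance. (This is the body the model is asked to write.)"""
        ...

Giving the model that much structure narrows the search space a lot compared to "merge my mesh patches."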
Another thing: it will eventually synthesize code that's close but not quite right. I found that asking it to make small changes, one at a time, would help for a while. Ex: "Modify that code to do X" or "...X in part Y." Eventually it started hallucinating in loops. I always had to fix some amount of what it generated, but it still saved time.