I had to optimize some Python code to reduce its memory usage. After trying every idea I could think of, I considered rewriting it in a different language. I copied and pasted the code into ChatGPT (GPT-4). I tried Rust first, but there were too many compilation errors. Then I tried Go and it worked perfectly. For the next couple of weeks I used it to improve the Go code, since I'd never used Go before. It gave me great answers; I think the code failed to compile only once or twice, and I used it dozens of times per day.
I'm now using the optimized Go code in production.
I default to immediately asking GPT-4 to review its own solution and fix any mistakes it finds.
There’s also an interesting paper about guiding the model to build a “tree of thoughts,” which lets it move forward and backward through candidate solutions before presenting its best one to you. The paper suggests you can squeeze a lot more performance out of LLMs (even smaller ones) this way. I’ve been wanting to experiment with my prompting in this fashion: https://arxiv.org/pdf/2305.10601.pdf
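The core loop in the paper is just a breadth-limited search over partial solutions. Here's a minimal sketch of that idea; the function names and the toy problem are mine, and in a real setup `generate_thoughts` and `score_thought` would each be LLM calls rather than the stand-in heuristics below:

```python
def generate_thoughts(state, k):
    """Propose up to k candidate next steps from a partial solution.
    Toy stand-in for an LLM proposal call: extend a digit string."""
    return [state + str(d) for d in range(k)]

def score_thought(state, target):
    """Heuristic value of a partial solution (the paper uses LLM
    self-evaluation here). Toy stand-in: negative distance of the
    digit sum from the target."""
    return -abs(target - sum(int(c) for c in state))

def tree_of_thoughts(target, depth=4, breadth=3, k=5):
    """Breadth-limited BFS: keep only the `breadth` best partial
    solutions at each level, so weak branches are abandoned (the
    'moving backward' part) while promising ones are extended."""
    frontier = [""]
    for _ in range(depth):
        candidates = [t for s in frontier for t in generate_thoughts(s, k)]
        candidates.sort(key=lambda s: score_thought(s, target), reverse=True)
        frontier = candidates[:breadth]
    return frontier[0]

best = tree_of_thoughts(target=10)
print(best, sum(int(c) for c in best))
```

The point is that the expand/score/prune loop is model-agnostic; swapping the stand-ins for actual prompts ("propose 5 next steps", "rate this partial solution 1–10") is the whole trick.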
I didn't keep track of the benchmarks very well. However, it went from taking 3+ days and 180GB of memory to process 80M rows [1] to processing 1.3B rows in ~6 hours using ~90GB of memory.
[1] I stopped the process early, as it was taking too long