
I've been using Gemini 2.5 and Claude 3.7 for Rust development, and I have been very impressed with Claude there. That wasn't the case in some architectural discussions, where Gemini impressed with its structure and scope. OpenAI 4.5 and o1 have been disappointing in both contexts.

Gemini doesn't seem as keen to agree with me, so I find it makes small improvements on its own, where Claude and OpenAI will go along with my initial suggestions until specifically asked to make improvements.

I have noticed Gemini not accepting an instruction to "leave all other code the same but just modify this part" in code that used an alpha API with a different interface from what Gemini believes is the correct current API. No matter how I prompted 2.5 Pro, I couldn't get it to respect my use of the alpha API; it would just assume I must be wrong.
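
To make the scenario concrete, here's a minimal sketch (all names are hypothetical and stubbed out so it compiles): suppose the alpha API uses a builder-style interface, while the stable API the model saw in training is a single method call. The model keeps "fixing" the former into the latter.

    // Hypothetical stub of an alpha client with a builder-style
    // interface, unlike the older one-call stable API.
    struct AlphaClient;

    impl AlphaClient {
        fn new() -> Self {
            AlphaClient
        }

        // Alpha interface: requests go through an explicit builder.
        fn request(&self) -> RequestBuilder {
            RequestBuilder { path: None }
        }
    }

    struct RequestBuilder {
        path: Option<String>,
    }

    impl RequestBuilder {
        fn path(mut self, p: &str) -> Self {
            self.path = Some(p.to_string());
            self
        }

        fn send(self) -> Result<(), String> {
            // Fail if no path was set before sending.
            self.path.map(|_| ()).ok_or("no path set".to_string())
        }
    }

    fn main() -> Result<(), String> {
        let client = AlphaClient::new();
        // Correct alpha usage. A model trained only on the stable API
        // tends to rewrite this to something like
        // client.get("/v1/things")?, even when told to leave it alone.
        client.request().path("/v1/things").send()
    }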

So I think patterns from the training data are still overriding some actual logic/intelligence in the model. Or the Google Assistant fine-tuning is messing it up.

I have been using Gemini daily for coding for the last week, and I swear they are pulling levers and A/B testing in the background, which is a very Google thing to do. They did the same thing with Assistant, which I was a pretty heavy user of back in the day (I was driving a lot).
