I've only tried Claude in Cursor's free trial, but it became very confused during a React 19 upgrade even after showing it a guide. It also kept telling me to install irrelevant dependencies, insisting my code would not work without them. My experience was more or less shared by several other senior devs using their companies' AI subscriptions. Are we all using the tools wrong? Or is there just an expected amount of having to fight with the machine to get a usable result?
> I've only tried Claude in Cursor's free trial, but it became very confused during a React 19 upgrade even after showing it a guide.
I have never used Cursor, but I see more and more complaints from people who use LLMs specifically via Cursor. It makes me think there's an issue specifically with Cursor, e.g., they try to save on tokens and end up not including enough context in the prompt.
> I've only tried Claude in Cursor's free trial, but it became very confused during a React 19 upgrade even after showing it a guide.
I am working on a Next.js 15 / React 19 app, and at least 95% of the code is written by Gemini 2.5 Pro Preview, and I rarely have to "fight" it. But it's not a Cursor workflow, of course. I keep a directory with LLM-readable documentation[0], use a repository serialization tool to serialize my repository into a prompt (this includes both the source code and the LLM-readable documentation), and send it to Gemini 2.5 Pro Preview. It ends up being over 100K tokens per query, but it works really well for me.
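For anyone curious what "serializing the repository into a prompt" looks like, here is a minimal sketch of that step. The file extensions, skipped directories, delimiter format, and output file name are my own assumptions for illustration, not the specific tool the comment refers to:

```python
# Minimal sketch: walk a repo and concatenate source files plus LLM-readable
# docs into one big prompt string. Suffix list, skip list, and output path are
# assumptions, not the actual tool mentioned above.
from pathlib import Path

INCLUDE_SUFFIXES = {".ts", ".tsx", ".js", ".jsx", ".css", ".md"}
SKIP_DIRS = {"node_modules", ".next", ".git"}

def serialize_repo(root: str) -> str:
    """Concatenate every included file, each preceded by its relative path."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        if path.is_file() and path.suffix in INCLUDE_SUFFIXES:
            rel = path.relative_to(root)
            body = path.read_text(encoding="utf-8", errors="ignore")
            parts.append(f"===== {rel} =====\n{body}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    prompt = serialize_repo(".")
    # The serialized repo (source + docs) becomes the bulk of the ~100K-token prompt.
    Path("repo_prompt.txt").write_text(prompt, encoding="utf-8")
    print(f"Wrote {len(prompt)} characters")
```

The point is just that the whole repository and its documentation go into every query, rather than relying on an editor to pick which snippets to include as context.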