I don’t rely on LLMs as much as I did when Copilot came out, but I 100% agree regarding static types: they make it so much easier to tell when the LLM is spewing nonsense. I probably wouldn’t use one at all with a dynamically typed language.
The key is not to generate huge spurts of code but small, incremental, reviewable blocks in dynamically typed languages. Also,
LLMs are a constantly updating product. The new DeepSeek coder model with o1-like reasoning and chain of thought just came out, and it’s far superior to LLM models from even a month ago, especially when trained on internal docs and files, plus synthetic LLM-generated metadata on those files, all accessible through an effective hybrid search engine.
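By “hybrid search” I mean something like the toy sketch below: blending a keyword score with a semantic (embedding) score when retrieving internal docs for the model’s context. The scoring functions and weights are purely illustrative assumptions, not any particular product’s API.

```python
# Toy sketch of hybrid retrieval: keyword score + embedding similarity, blended.
from dataclasses import dataclass

@dataclass
class Doc:
    path: str
    text: str
    embedding: list[float]  # assume embeddings were precomputed elsewhere

def keyword_score(query: str, doc: Doc) -> float:
    # Crude term-overlap stand-in for a proper BM25 score.
    terms = set(query.lower().split())
    words = doc.text.lower().split()
    return sum(words.count(t) for t in terms) / (len(words) or 1)

def semantic_score(query_emb: list[float], doc: Doc) -> float:
    # Cosine similarity between query and document embeddings.
    dot = sum(a * b for a, b in zip(query_emb, doc.embedding))
    norm = (sum(a * a for a in query_emb) ** 0.5) * (sum(b * b for b in doc.embedding) ** 0.5)
    return dot / norm if norm else 0.0

def hybrid_search(query: str, query_emb: list[float], docs: list[Doc],
                  alpha: float = 0.5, k: int = 5) -> list[Doc]:
    # alpha is a tunable blend weight; 0.5 is an arbitrary default here.
    scored = [
        (alpha * keyword_score(query, d) + (1 - alpha) * semantic_score(query_emb, d), d)
        for d in docs
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored[:k]]
```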
It’s perfectly usable with dynamic languages like JS and Python, and it’s perfectly doable to write bug-free, effective code, if one is restrained and thoughtful in its use, which builds up as muscle memory over time anyway.
Especially with linters and type hints enabled in JS (as TS) and type hints in Python, it becomes even more doable, if you insist on only working with statically typed code when using an LLM.
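For example, a trivial Python sketch of how a hint catches a hallucinated return shape before you ever run the code (the function and the mistake are made up; the mypy message is paraphrased):

```python
# With type hints, a checker like mypy flags the LLM's wrong assumption statically.

def average(values: list[float]) -> float:
    return sum(values) / len(values)

# An LLM might confidently generate this call, treating the result as a list:
result: list[float] = average([1.0, 2.0, 3.0])
# mypy: Incompatible types in assignment (expression has type "float",
#       variable has type "list[float]")
```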
It's marginally better than Claude and costs 10x as much. o1 is not a solution.
I agree there is plenty of room for improvement on the IDE side: they need to start fetching dependency code and documentation dynamically, gather the relevant context, and pass it in the prompt to the LLM, so that it can use real code rather than whatever the model remembers from training about what a certain library does.
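Roughly what I mean, as a hedged Python sketch: pull the real source and docstring of a dependency symbol at prompt time and put it in the context. `call_llm` is a placeholder, not any real client API, and `json.dumps` just stands in for “a dependency the model should use correctly.”

```python
# Fetch the installed dependency's actual docs/source instead of trusting
# what the model memorized in training, then build the prompt around it.
import inspect
import json  # stand-in for a dependency whose real API we want in context

def build_context(symbol) -> str:
    # Real docstring and real source, pulled from the installed version.
    doc = inspect.getdoc(symbol) or ""
    try:
        src = inspect.getsource(symbol)
    except (OSError, TypeError):
        src = ""  # e.g. C extensions have no Python source to show
    return f"Documentation:\n{doc}\n\nSource:\n{src}"

def build_prompt(task: str, symbol) -> str:
    return (
        "Use only the API shown below; do not invent parameters.\n\n"
        f"{build_context(symbol)}\n\nTask: {task}"
    )

prompt = build_prompt("Serialize this config with stable key ordering.", json.dumps)
# call_llm(prompt)  # hypothetical call to whatever model the IDE talks to
```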