
I was just complaining to my friend about how much trouble I'm having with it. I purchased the $20 GPT-Plus so I could use GPT-4 after reading someone on HN say that GPT-4 is "scary impressive" at writing code.

I have two tasks I wanted it to try, both making use of public APIs, starting from scratch. In short, it was frustrating as hell. Never-ending import problems -- I'd tell it the error, it'd give me a different way to import, only leading to a new import problem. I think I used up all 100 of my GPT-4 queries in 4 hours just on the import/library problems.
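If the suggested code is Python (an assumption; the comment doesn't say which language), one cheap way to break that loop is to check whether a suggested import even resolves in your environment before running the script. A minimal sketch using only the standard library:

```python
import importlib.util

def import_exists(module_name: str) -> bool:
    """Return True if `module_name` resolves in this environment,
    without actually importing it."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:
        # Raised for dotted paths whose parent package doesn't exist.
        return False

print(import_exists("json"))          # stdlib module: True
print(import_exists("not_a_module"))  # made-up name: False
```

Pasting a model-suggested import through a check like this at least tells you whether the failure is "wrong name" or "package not installed" before you burn another query on it.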

Then there was constant misuse of functions -- ones that didn't exist, or didn't exist on the object it was using but did exist on some other object, at which point it would apologize and fix it (why didn't you give me the correct one the first time, if you "knew" the right one?).

The actual code it wrote seemed fine, but not what I'd call "scary impressive." It also kept writing the same code in many different styles, which is kind of neat, but I found one style I particularly liked and I don't know how to tell it to use that style.

Lastly, it's only trained up to Sep 2021, so all the APIs it knew were well behind. I did manage to tell it to use an updated version, and it seemed to oblige, but I don't really know if it's using it or not -- I still continued to have all the above problems with it using the updated API version.

Anyway, I hope MS fiddles with it and incorporates it into Visual Studio Code in some clever way. For now, I'll continue to play with it, but I don't expect great things.




I think the current train of thought is "keep increasing the size of the language model and you don't need to worry about integrating with LSPs".

Perhaps there is some merit to this. If the language model is large enough to contain the entirety of the documentation and the LSP itself, then why bother integrating with the LSP? _Especially_ if you can just paste the entirety of your codebase into the LLM.
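For what "paste the entirety of your codebase into the LLM" might look like in practice, here's a rough sketch (my own assumption of the workflow, not any official tooling) that bundles a repo's source files into one labelled prompt string, with a crude character budget standing in for a real token count:

```python
from pathlib import Path

def bundle_codebase(root: str, suffix: str = ".py",
                    max_chars: int = 100_000) -> str:
    """Concatenate all files under `root` matching `suffix` into one
    prompt string, labelling each file with its path."""
    parts = []
    total = 0
    for path in sorted(Path(root).rglob(f"*{suffix}")):
        text = path.read_text(encoding="utf-8", errors="replace")
        chunk = f"### {path}\n{text}\n"
        if total + len(chunk) > max_chars:
            break  # crude budget; real tools count tokens, not characters
        parts.append(chunk)
        total += len(chunk)
    return "".join(parts)
```

Even with a large context window, the budget check matters: a codebase of any size overruns the window long before the "just paste everything" idea pays off.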


> If the language model is large enough to contain the entirety of the documentation and the LSP itself, then why bother integrating with the LSP?

If your goal is to get a response to an LSP query, why on earth would you use an LLM trained on data where >99.9999% of that data has nothing to do with answering an LSP query?

Why would I swap out an LSP server with 100% accuracy for an LLM that's slower and less accurate?
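For context on what "an LSP query" actually is: per the Language Server Protocol spec, it's a JSON-RPC 2.0 request framed with a Content-Length header. A minimal Python sketch (the file URI and cursor position here are made up for illustration):

```python
import json

def frame_lsp_request(method: str, params: dict, request_id: int = 1) -> bytes:
    """Frame a JSON-RPC 2.0 request per the LSP base protocol:
    a Content-Length header, a blank line, then the UTF-8 JSON body."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

msg = frame_lsp_request("textDocument/completion", {
    "textDocument": {"uri": "file:///tmp/example.py"},
    "position": {"line": 10, "character": 4},
})
print(msg.decode("utf-8"))
```

The server answers such a request deterministically from a parsed view of the code, which is exactly why it's hard to see what a probabilistic next-token predictor adds here.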


> I'd tell it the error, it'd give me a different way to import, only leading to a new import problem

Its dataset is thousands of blog posts and Stack Overflow questions about this very thing; of course the autocomplete engine is going to predict that the next response is "another way of doing x".



