
As long as LLMs are seen as patterned knowledge aggregators, they work as intended. They hallucinate answers mostly because they try to interpolate data, or because there is some pattern they didn't catch due to lack of context or training.

They're really good for type / name finding and boilerplate generation. For larger suggestions, as you pointed out, they're too often wrong to be used as is. They can give good ideas, especially if guided with comments, but usually at that stage I just use Phind.
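
To illustrate the "guided with comments" approach: a descriptive comment is often enough of a prompt for a completion model to fill in routine boilerplate. A minimal sketch in Python, where the comment is what you'd write and the rest is the kind of completion you might get (the function name and retry parameters are illustrative, not from the thread):

    # Fetch JSON from a URL and return the parsed payload,
    # retrying up to three times on transient network errors.
    import json
    import time
    import urllib.error
    import urllib.request

    def fetch_json(url, retries=3, backoff=1.0):
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(url) as resp:
                    return json.load(resp)
            except urllib.error.URLError:
                if attempt == retries - 1:
                    raise
                # linear backoff between attempts
                time.sleep(backoff * (attempt + 1))

Code this formulaic is exactly the interpolation case described above, so the model usually gets it right; it's the larger, less patterned suggestions that go wrong.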


