I have the same observation: LLMs seem heavily biased toward adding complexity to solve problems. For example, they add explicit handling for the edge cases I point out rather than reworking the algorithm to eliminate the edge cases altogether. Almost every time, they start with something that's 80% correct, then iterate into something that's 90% correct while being super complex, unmaintainable, and with no chance of ever covering the last 10%.
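A minimal sketch of what I mean, using a hypothetical interval-overlap check (my own toy example, not from any real session):

  # Edge-case-driven version: each case bolted on as it was discovered.
  def overlaps_v1(a_start, a_end, b_start, b_end):
      if a_start <= b_start <= a_end:            # b starts inside a
          return True
      if a_start <= b_end <= a_end:              # b ends inside a
          return True
      if b_start <= a_start and a_end <= b_end:  # b fully contains a
          return True
      return False

  # Reworked version: one condition, no special cases left to miss.
  def overlaps_v2(a_start, a_end, b_start, b_end):
      return a_start <= b_end and b_start <= a_end

  # Both agree on closed intervals, but only one keeps growing branches.
  assert overlaps_v1(1, 5, 4, 8) == overlaps_v2(1, 5, 4, 8) == True
  assert overlaps_v1(1, 3, 4, 8) == overlaps_v2(1, 3, 4, 8) == False

In my experience the models keep producing (and extending) the first shape even when nudged toward the second.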



Unfortunately, this is my experience as well, to the point where I can't trust it with any technology that I'm not intimately familiar with and can't thoroughly review.



