
It doesn't code like a human, so you would expect it to be better at some kinds of tasks. It brute-forces the problems by generating a million solutions and then trying to trim that down; a few problems might be vulnerable to that style of approach.


Are you sure? "Brute forces the problems by generating a million solutions and then tries to trim that down" isn't how I would describe the way an LLM works.


The original AlphaCode paper in Nature explains the approach: they generate many potential solutions with the LLM and do a lot of post-processing to select candidates. Here's where the probabilistic nature of LLMs hurts, I think.
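A minimal sketch of that "generate many, then filter" strategy (a toy illustration, not AlphaCode's actual pipeline — `sample_candidates` here is a stand-in for sampling programs from the model, and the filtering step only checks the problem's example I/O pairs):

```python
import random

def sample_candidates(n):
    """Stand-in for LLM sampling: draw n candidate functions.
    A real system would sample n programs from the model instead."""
    pool = [
        lambda x: x + 1,   # incorrect candidate
        lambda x: x * 2,   # correct for this toy problem
        lambda x: x * x,   # incorrect candidate
    ]
    return [random.choice(pool) for _ in range(n)]

def filter_on_examples(candidates, examples):
    """Keep only candidates that reproduce every (input, output) example."""
    return [f for f in candidates
            if all(f(inp) == out for inp, out in examples)]

# Toy spec: the answer should double the input.
examples = [(1, 2), (3, 6)]
survivors = filter_on_examples(sample_candidates(1000), examples)
print(len(survivors))
```

In the paper's setting, surviving candidates are further clustered and deduplicated before a small number are submitted; this sketch stops at the filtering step.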


That is how it works; read the paper.



