
> LLMs follow instructions.

They don't.

> Garbage in = garbage out generally.

Generally, this statement is false.

> When attention is managed and a problem is well defined and necessary materials are available to it, they can perform rather well.

Keyword: can.

They can also perform poorly despite all the management and materials.

They can also work really well with a loosey-goosey approach.

The reason is that they are non-deterministic systems, and their performance is affected more by compute availability than by your unscientific, random attempts at reverse-engineering their behavior: https://dmitriid.com/prompting-llms-is-not-engineering
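To make the non-determinism point concrete, here is a minimal sketch of temperature sampling, the decoding step most LLM APIs use by default. The logits are made-up illustrative numbers, not from any real model: the point is that identical input produces a distribution, not a single answer, so repeated runs can yield different tokens no matter how the prompt was engineered.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits via temperature-scaled softmax."""
    rng = rng or random
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the cumulative distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Identical "prompt" (same logits), sampled repeatedly: the output varies.
logits = [2.0, 1.5, 1.0, 0.5]
samples = [sample_token(logits, temperature=1.0) for _ in range(20)]
print(samples)  # varies from run to run
```

Setting temperature near zero makes sampling nearly greedy, but hosted APIs still don't guarantee bit-identical outputs across runs, which is the crux of the argument above.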



