
> For now, the hallucination problem is objectively only getting worse with reasoning models, and there is no solution in sight.

You pulled this statement out of your ass. Objectively? We have baseline quantitative benchmarks that say the opposite: LLMs have been scoring better on these tests, and the scores keep improving. Where did your "objective" statement come from? Anecdotes? Quotations?

> Technical skills will matter for the simple reason that someone will always need to supervise the output of the AI.

Humans will never be so stupid as to lose all technical skill. In the beginning the human needs only mild technical skill, just enough to somewhat understand what the AI is doing. But as trust grows, the human understands less and less of it. The human is like a desperate micromanager trying to cling to understanding, and that understanding inevitably erodes completely.



For commentary on the hallucination problem, see https://news.ycombinator.com/item?id=43942800 and the associated news link. I have observed it myself in practice with o3 compared to o1.



