
> 1. If the subtext is "not intelligence as we know it, but something much inferior": that may or may not be true, but crapness at spelling puzzles isn't much evidence for it.

I'm not making a judgement call on whether it is or isn't intelligence, just that it's not like any sort of intelligence we've ever observed in man or beast.

To me, LLMs feel more like "a tool with built-in knowledge" than "a person who has read up on the specific subject".

I know that many people use the analogy of a coding LLM as "an eager junior engineer", but even eager junior engineers only lack knowledge. They can very well come up with something they've never seen before. In fact, it's common for them to reinvent a coding pattern or mechanism they've never encountered.

And that's only for coding, which is where 99.99% of LLM usage falls today.

This is why I say it's not intelligence as we define it, but it's certainly something, even if it's not an intelligence we recognise.

It's not unintelligent, but it's not intelligent either. It's something else.



Sure. But all those things you just said are about the AI systems' ability to come up with new ideas versus their knowledge of existing ones. And that doesn't have much to do with whether or not they're good at simple spelling puzzles.

(Some of the humans I know who are worst at simple spelling puzzles are also among the best at coming up with good new ideas.)



