Well that's a great question, and an even more basic complaint I have with the AI research space. They have yet to bother coming up with a clear way to define or recognize intelligence or consciousness.


I've always thought intelligence = ability to solve problems. And consciousness to be a completely separate thing.


Those are interesting starting points. I don't know if I'd put it that simply, but the direction seems totally reasonable.

The ability to solve problems is a particularly interesting one. To me there's a difference between brute forcing or pattern recognition and truly solving a problem (I don't have a great definition for that!). If that's the case, how do we really recognize which one an LLM or potential AI is doing?

It'd be a huge help if AI researchers put more focus on the interpretability problem before developing systems that could plausibly give rise to intelligence.


