Hacker News

An AI/person that can solve novel problems (whether by teaching it or otherwise) is a more general kind of intelligence than one that cannot.

It's a qualitatively better intelligence.

An intelligence that is better at solving problems that fall within its training set is only quantitatively better.

Likewise, an intelligence that learns faster is quantitatively better.

To give a concrete and simple example, take a simple network trained to recognize digits. The network can be of arbitrary quality, it can be robust or not, fast or slow, but it can't do more than digits.

Another NN that can learn to recognize more symbols is a more general kind of AI, which again introduces another set of qualitative measures, namely how much training it needs to learn a new symbol robustly.
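To make the contrast concrete, here's a toy pure-Python sketch (class names and details are illustrative, not any real library): the first classifier's output layer is frozen at 10 classes, so it can only ever answer with a digit; the second can grow its output space, which is a qualitative difference, and how much data each new row then needs to train robustly is the new qualitative measure.

```python
import random

class DigitNet:
    """Toy linear classifier with a frozen 10-way output: digits only."""
    def __init__(self, n_inputs=784, n_classes=10):
        self.weights = [[random.gauss(0, 0.01) for _ in range(n_inputs)]
                        for _ in range(n_classes)]

    def predict(self, pixels):
        scores = [sum(w * x for w, x in zip(row, pixels))
                  for row in self.weights]
        return scores.index(max(scores))  # always one of the baked-in classes

class GrowableNet(DigitNet):
    """Same classifier, but it can add an output row for a new symbol."""
    def add_class(self, n_inputs=784):
        # More general: room for new symbols, at the cost of needing
        # (some amount of) training data for each new row.
        self.weights.append([random.gauss(0, 0.01)
                             for _ in range(n_inputs)])
        return len(self.weights) - 1  # index of the new symbol's class

digits = DigitNet()
assert digits.predict([0.5] * 784) in range(10)  # can never leave 0-9

general = GrowableNet()
new_label = general.add_class()  # e.g. start learning '+'
assert new_label == 10           # the output space grew
```

No amount of quantitative improvement (better weights, faster inference) turns DigitNet into GrowableNet; that's the sense in which generality is a different axis than quality.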

'Intelligence' is a somewhat vague term, as any of the measures I've defined above could be called intelligence (training accuracy, learning speed, inference speed, coverage of the training set, etc.).

You could claim that a narrower kind of intelligence that exists without learning (which is what ChatGPT is, and what your example of the person with only short-term memory describes) is still intelligence, but then we are arguing semantics.

Inference-only LLMs are clearly missing something and lack generality.



They could be more general, sure, but this is all extremely far from his bar for classifying things as AI.

> To give a concrete and simple example, take a simple network trained to recognize digits. The network can be of arbitrary quality, it can be robust or not, fast or slow, but it can't do more than digits.

This is the kind of thing he would class as AI, but not LLMs.



