
Michael I. Jordan is spot on. We have NO artificially intelligent systems, for any sensible definition of "intelligent". None. ZERO.

We have "big data". We have (brittle and rather poor) "pattern recognition". We have (very limited) "simulate a conversation". But we do not even know what "intelligence" really means!

We (the industry) should recognise "AI" as a term that has become essentially meaningless. "AI" is now nothing but a marketing splat, like "New and Improved" or "Low Fat".



> We have "big data". We have (brittle and rather poor) "pattern recognition". We have (very limited) "simulate a conversation".

Yes, yes, yes, exactly.

> We have NO artificially intelligent systems, for any sensible definition of "intelligent". None. ZERO.

Yes, though -- what are some of your sensible definitions of intelligence?

> But we do not even know what "intelligence" really means!

...oh. I mean, you're not wrong, we don't know. But then how can you argue that AI isn't "intelligent"?

What if human "intelligence" really just is pattern recognition too? With maybe some theory of mind, executive function, and "reasoning" -- Mike complains machines can't do thinking in the sense of "high level" reasoning, though one could argue they just don't have enough training data here.

And everything else is emergent?

Then we're maybe not as super far off.

I'm reminded of the Arthur C. Clarke quote [0]:

> If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong.

[0] https://www.scientificamerican.com/article/remembering-sir-a...


We do have them; they just kinda suck so far. Look up DeepMind's attempts at attacking games with DDQN, where said AI develops new strategies for an entirely new game, and the attempts to solve Montezuma's Revenge and other Atari classics, by both DeepMind and OpenAI. Both systems are somewhat adaptable to new problems too. There are also Uber's Go-Explore and RMT's.

These are the closest we've come to intelligence. They deal with large unobservable state and novelty, with few-shot learning, few objectives, and sparse rewards. They haven't quite cracked automated symbolization (the AIs do not quite create a complete symbolic representation of the game).
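To make the sparse-reward point concrete, here's a toy sketch of the idea (this is not DeepMind's or Uber's actual code; the chain environment, the constants, and the count-based bonus are all made up for illustration). It's tabular Q-learning on a chain where only the far end pays off, with a small intrinsic bonus for visiting novel states standing in for the "curiosity" tricks these systems use:

    import random
    from collections import defaultdict

    N_STATES = 20            # toy chain; the only extrinsic reward sits at the far end
    ACTIONS = [-1, 1]        # step left or right
    q = defaultdict(float)   # Q(s, a) value table
    visits = defaultdict(int)
    alpha, gamma, bonus = 0.1, 0.99, 0.5

    for episode in range(500):
        s = 0
        for step in range(50):
            if random.random() < 0.1:                        # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            visits[s2] += 1
            r = 1.0 if s2 == N_STATES - 1 else 0.0           # sparse extrinsic reward
            r += bonus / (visits[s2] ** 0.5)                 # intrinsic bonus for novel states
            target = r + gamma * max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])        # standard TD update
            s = s2

Without the intrinsic bonus the agent almost never stumbles onto the reward; with it, exploration is rewarded directly, which is roughly the trick behind the Montezuma's Revenge results.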

I recommend following AAAI conferences for more details.


I've been having "AI" debates like this for about 10 years now, and I think they usually go in 1 of 2 directions:

1. We don't know what intelligence is.
2. AI can never be intelligent because humans are special (in various ways).

Of the two, I think that 1 is the more compelling to talk about. Let's look at state-of-the-art Large Language Models (GPT, BERT, BART, T5, etc.). Everyone claims that they can't be intelligent because they're just cleverly predicting the next tokens. The most common failure mode is that they hallucinate: if you ask them to do something for you, they'll get it wrong in a way that kind of makes sense. There are some other, more subtle problems as well, like common-sense reasoning, negation, and factuality.

We could say that because of these problems they are not "intelligent". But why is that so important? Can we say with certainty that human intelligence is more than just patterned IO? If it is just highly tuned patterned IO with the environment, perhaps we have discovered intelligent systems, but they're handicapped because they're limited in their sensory perception (read: data modalities). And perhaps by combining several of these models in clever ways, we will end up with an architecture for patterned IO that is indistinguishable from human intelligence.
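As a minimal sketch of what "cleverly predicting the next tokens" means in practice - assuming the Hugging Face transformers library and the public "gpt2" checkpoint, purely for illustration, not how any of these models is actually served - the whole trick is a loop over next-token probabilities:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = "The capital of France is"
    for _ in range(5):                               # greedily extend by five tokens
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits          # shape: [1, seq_len, vocab_size]
        next_id = int(torch.argmax(logits[0, -1]))   # pick the most probable next token
        text += tokenizer.decode([next_id])
    print(text)

Whether repeating that loop at enormous scale amounts to "intelligence" is exactly the question.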

The naysayers claim that this won't work because we'll still end up with mere pattern prediction machines. But this starts to look like a "humans are special" argument.


Well, it will be interesting to see how this develops in the future. At some point we will have systems powerful enough to process and learn in real time, also using sensors that are the equivalent of human senses (or even more powerful). At that point, if we can successfully model and mimic a typical human, why should it matter if it's not a human?

As for the hallucinating point, I remember a funny story. I once tripped on the curb and fell down; my foot ached for a week. My then 4-year-old daughter took her first-aid set for dolls and tried to "cure" my foot. My mother heard the story and found it cute, so she asked my daughter: "Will you cure me like that, too?" My daughter seemed stupefied and answered: "Are you going to trip and fall, grandma?"

My feeling is that the missing links will be found one day and the AI of the future will be able to apply more adult-like "reasoning."


Does it even matter "what intelligence is"? Much like "life" [0], the difficulty seems to be coming from being unable to define it, rather than "finding" it. There are multiple ways it can be defined, based on a bunch of different properties, and each definition delivers different outlooks.

Similar to "life", we use "intelligence" in everyday speech without specifying which definition we mean. I don't think that's going to change – it's just as unproductive to limit "life" to a single definition (what about viruses? unconsciousness? ecosystems?) as it would be with "intelligence" (pets? ants? being able to converse with a human? showing reasoning? creativity?).

But that also means that the popular term "AI" will never be precise.

[0] https://www.quantamagazine.org/what-is-life-its-vast-diversi...


As a materialist in matters of the mind, I regard proposition 2 to be an unverifiable belief of those who hold it, but I also regard proposition 1 as being simply a statement of how things currently are: at this point, we do not, in fact, know what intelligence is.

To say that it is "just" highly tuned patterned IO with the environment would be so broad as to be meaningless; all the explanation is being brushed away by that "just", and in the current state of research, no-one has either demonstrated AI or explained intelligence with sufficient specificity for this to be a clearly true synopsis of our knowledge.

You are not quite asserting that to be so, however; you simply posed the question of whether it is so. In so doing, you are shifting the burden of proof, and proposition 1 stands until someone settles the issue by presenting a testable - and tested - theory of intelligence (note that I wrote of intelligence, not about intelligence; we have plenty of the latter that do not rise to the level of being the former).

My attitude to the current crop of models is that they demonstrate something interesting about the predictability of everyday human language, but not enough to assume that simply more of the same (or something like it) will amount to AI - we seem to be missing some important parts of the puzzle. If a language model can come up with a response coherently explaining why I am mistaken in so thinking, then I will agree that AI has been achieved.


Correct. Apparently my phone has "AI" because it recognises a flower as a flower and applies a colour filter and background blur when I use my camera. This is not AI.

By the same extension of logic, any program that recognises input data and performs some form of pre-programmed logic is AI. I.e., any computer program?


When young children start recognizing shapes, animals, colors, etc, you don’t consider that a sign of intelligence? What is it a sign of then?


I certainly consider that exemplar child intelligent. I'm happy to consider my pet dogs intelligent - some more than others:). And they are all leagues ahead of any "artificial" systems we've got.

And sorry, I don't have a definition of intelligence - but that's exactly the point. However, I would require any definition of intelligence to include flexibility, self-awareness, world-modelling, curiosity, and goal-setting.

Flexibility is surely one of the things that distinguishes a chess machine (or chatbot, or image recognition) from a child or a dog - the latter recognise and adapt to new situations and environments. Self-awareness seems a requirement for setting one's own goals. Curiosity and world-modelling go together, and world-modelling is presumably required for exploring one's own goal-setting (a random walk is not intelligence).

All these things are so many worlds distant from Google Lens, or Deep Blue.


A three year old child is already generally far beyond what any software can do.


Recognizing a flower is absolutely AI.

https://xkcd.com/1425/

ok - or a bird.



If that is intelligent behavior then literally any physical process is "intelligence" embodied, though the magnitude or intensity of this intelligence obviously varies based on what strictly is happening.

This is because anything that happens in reality is computable, and you have described a straightforward computation as "intelligence".

I actually happen to sincerely adhere to this perspective, as a particular flavor of panpsychism.


A battery discharging does not recognise flowers. The sun does not recognise flowers. I do not create an excess of energy by splitting atoms. These things are not equivalent at all levels of abstraction.


Of course not, and it is silly to try to paint my argument as claiming that. A battery discharging is not very intelligent, but the GGP implies that this exists as a gradient down to the lowest levels of physical dynamics.

Put another way, the complexity of the computation determines the complexity of the result. The sun+flower+ground system "recognizes" a flower by means of striking the flower and the surrounding area with photons and "recording" the result as the shadow.


'AI' as a term has been used by people in the industry for decades, even by early computer science pioneers referring to incredibly simple applications - it was Hollywood that appropriated the term for the likes of Skynet, not the other way around.


Yeah, but my hot new startup is real AI.

Now where is my funding?


Check out Convolutional Neural Networks. They learn from example images, progressively improving as you train them more, and you can see that the deeper the layer, the more abstract the recognition becomes, going from simple shapes and edges to full-on eyes, the figure of a person, a car, etc. It's absolutely learned intelligence, not to be confused with sentience.
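Here's a minimal PyTorch sketch of that layer stacking (illustrative only: the layer sizes are arbitrary, and it would need training on labelled images before it recognised anything):

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # edges, blobs
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # textures, simple parts
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # object parts (eyes, wheels, ...)
            )
            self.classifier = nn.Linear(64, n_classes)  # "flower", "bird", ...

        def forward(self, x):
            h = self.features(x)        # [batch, 64, H/8, W/8]
            h = h.mean(dim=(2, 3))      # global average pool
            return self.classifier(h)

    logits = TinyCNN()(torch.randn(1, 3, 64, 64))
    print(logits.shape)                 # torch.Size([1, 10])

The comments on each conv block are the usual hand-wavy reading of what the learned filters end up responding to; the exact features are learned from data, not programmed.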

Remember, a critical part of human intelligence is pattern recognition. If you dismiss pattern recognition as not intelligence, you're dismissing a fundamental part of what makes humans intelligent. It's no different than an insect with the intelligence to recognize predators.


"a critical part of human intelligence is pattern recognition" almost certainly true. But a wheel is a critical part of a car, but a wheel is not a car.


That's why I mentioned simpler intelligence like that in insects, where recognizing a predator is still a form of intelligence, even if it's rather crude and very analogous to current ML capabilities.


I think the technical side of the industry has known this all along. AI is a dream to pursue.

The business/marketing side of the industry has doubled down on the term. Many industries outside have adopted it as a way to make their boring widget sound new and interesting.

I bought a TV with built in AI recently. It’s just a TV to me. I’m sure it has some algorithms but that word is old and does not sound premium anymore.

Whenever I see an AI startup, I mostly am expecting it's really just some guy doing things that don't scale, like manning a chat bot or something.


I wonder what the breakthrough will be: is it hardware or software? It seems like we can make as powerful a computer as we want, but what makes sentience? Does a fly or gnat have sentience?


I know I shouldn't be so pedantic, but you probably don't mean sentience but sapience [0]. Sentience is the ability to sense and react to stimuli. A fly or a gnat is certainly sentient; they can see, smell, feel the sensation of touch and react accordingly. That is all that is required for a being to be sentient. A really interesting example: if you shock a caterpillar, then even after metamorphosis the resultant moth remembers the experience and reacts accordingly [1].

Although it is pedantic, it is also an important distinction: sentience and sapience exist on a spectrum. At one end you might have gnats as purely sentient beings; humans always claim themselves as fully sapient, so much so that we named our species Homo sapiens.

Different species exist somewhere on this spectrum, and where a particular species ends up is subjective. Many people would put whales and dolphins [2], and often dogs and cats, further towards the sapient end of the spectrum (vegans would probably push most mammals towards the sapient end), with insects remaining simply sentient (even for many vegans).

As humans we seem to have an almost inbuilt understanding that not all species are capable of feeling the same way we do, but when you look at the animals we seek to protect and those we don't, what you find is that the less human the species, the less we care for the well-being of a particular specimen of that species; we care most about the suffering of mammals (more so for the larger ones than the small), and least about the suffering of fish or insects or spiders.

I'd argue that our inbuilt understanding of where an animal fits on the sentient-sapient spectrum is simply how easy it is for us as humans to imagine the plight of a specimen of that species.

What Is It Like to Be a Bat? [3] is an excellent paper on this subject. It argues that we as humans can never fully understand what it is to be a bat; we can imagine what it is like to fly, or echolocate, but that will never be the same as the bat's perspective.

From where I'm sitting, computers are already sentient: they can sense their environment and react accordingly. Self-driving cars are an excellent example, but so is the temperature sensor in my greenhouse that opens a window to keep it cool; it is sensing the temperature of the air and reacting accordingly.
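That greenhouse controller is a handful of lines - something like the sketch below, where read_temperature and set_window are hypothetical stand-ins for the real sensor and actuator. Nothing in it reasons about why it reacts:

    import random
    import time

    OPEN_THRESHOLD_C = 30.0

    def read_temperature():
        # stand-in for a real sensor driver
        return 25.0 + random.random() * 10

    def set_window(open_):
        # stand-in for a real actuator
        print("window open" if open_ else "window closed")

    for _ in range(3):                        # a real controller would loop forever
        temp = read_temperature()
        set_window(temp > OPEN_THRESHOLD_C)   # pure stimulus -> response, no notion of "hot"
        time.sleep(1)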

I in no way believe that my temperature sensor has any sapient qualities. It can't reason about why it's reacting; it can simply react. I don't believe that, as the temperature passes the 'open window' threshold, the system recognises the signal as pain. But the same is true of the fly. If I pull a wing off a fly, I know it senses the damage, but does it really feel it as pain?

When considering my cat: if I step on its tail, I'm sure it feels pain, but is that true, or does it simply react in a way that I as a human consider an appropriate reaction to pain?

I can't ever truly understand how my cat feels when I stand on her tail, just as I can't truly know that the fly isn't trying to scream out in horror and pain at what I've just done to it.

It is because of our subjectivity about the placement of animals on the sentient-sapient spectrum, and our inability to ever fully appreciate the experience of another, that I am convinced that even if we did create a sapient machine, its experience would be so far removed from our own that we would fail to recognise it as such.

The problem with this rabbit hole is, firstly, that I might convince myself that eating meat is wrong (and, well, I like bacon too much for that), and secondly that you'll quickly end up in the philosophical territory of I, Robot:

"There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote... of a soul?" [4].

[0] https://grammarist.com/usage/sentience-vs-sapience/
[1] https://www.newscientist.com/article/dn13412-butterflies-rem...
[2] https://uk.whales.org/whale-culture/sentient-and-sapient-wha...
[3] https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/hum...
[4] https://www.imdb.com/title/tt0343818/characters/nm0000342


Yeah, by sentience I did mean more than sensing. As long as sapience doesn't imply human, then I agree. Just about awareness, real awareness... which I don't know what that means. An IMU is aware, right? Well, it's a sensor.

> named our species Homo Sapiens

I see

> I in no way believe that my temperature sensor has any sapient qualities, It can't reason about why it's reacting

Right like the training

> recognises the signal as pain

Yeah, that's something else too. I know there are concepts like word2vec, but still, how do you have meaning?

> even if we did create a sapient machine it's experience would be so far removed from our own

Yeah maybe a sociopath machine

That was a great movie


You said it man, spot on man (͡• ͜ʖ ͡•)



