
Actually, it's definitely NOT a sign of ANY intelligence at all! It's only sophisticated pattern matching on a (very large) corpus... The original sin of "Artificial Intelligence" has been to use the word "intelligence" for something that has nothing to do with it.

The real philosophical implication of ChatGPT is not that a "program shows intelligence" but "what do we call 'intelligence' and how do we recognize it?" And all ChatGPT is showing is that humans are easily prone to BELIEVE anything without thinking, because thinking is hard and humans are lazy. Religions of all kinds have explained, since the beginning of humanity, whatever seemed to be "magic"...

As long as an AI program isn't able to really "create" something new - meaning something different from any combination of what it has been fed - I don't think you can call it "intelligent". It's only rearranging pieces of information in different orders (more or less coherently, so that a human can make sense of them). We're still running the ELIZA program... just with more data
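To make concrete what "only rearranging pieces of a corpus" would literally look like in its crudest form, here is a toy bigram generator (purely illustrative; ChatGPT is not built this way, but it shows the baseline the claim compares it to):

    import random
    from collections import defaultdict

    # Toy bigram "generator": it can only rearrange fragments of its corpus,
    # which is the strong version of the "pattern matching" claim above.
    corpus = "the cat sat on the mat while the dog sat on the rug".split()

    successors = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev].append(nxt)          # remember which word followed which

    def generate(start="the", length=8):
        word, out = start, [start]
        for _ in range(length - 1):
            options = successors.get(word)
            if not options:
                break
            word = random.choice(options)     # pick a continuation seen in the corpus
            out.append(word)
        return " ".join(out)

    print(generate())  # e.g. "the dog sat on the mat while the"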

But YMMV



You’re conflating the way it’s built with the things it’s doing. Your argument could also be « neurons are just blobs of atoms exchanging electric signals, it’s not intelligence »

Yes, in the end it’s just a huge n-dimensional cube of numbers. But that’s my point: can human-level intelligence be built on top of that? I was just as skeptical as you for this reason (and for others I consider more important, such as the fact that these machines don’t have any personal, embodied experience of the information they manipulate, so meaning shouldn’t be able to emerge from them). But looking at the facts, I have to admit I’m at least partially wrong.
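To be fair to the "cube of numbers" framing, here is a minimal sketch (illustrative only, nothing like a real transformer) where the network's entire "knowledge" is two arrays of floats and inference is just arithmetic on them:

    import numpy as np

    # Minimal sketch (illustrative only): the network's whole "knowledge"
    # is these arrays of numbers, and inference is just arithmetic on them.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 16))     # the "n-dimensional cube of numbers"
    W2 = rng.normal(size=(16, 4))

    def forward(x):
        h = np.maximum(0, x @ W1)     # matrix multiply + ReLU
        return h @ W2                 # another matrix multiply

    x = rng.normal(size=(1, 8))
    print(forward(x).shape)           # (1, 4)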


Actually, a machine may be programmed to act like it understands something without understanding it as we usually conceive of it (through a kind of intellectual model)

So the question might be: is the capacity to build an "internal intellectual model" the sign of intelligence? Can we consider the weights assigned to an AI's neurons an "internal representation of an abstract model"? When we dump a trained AI model, can we consider that an "abstract model construction"?
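For what it's worth, "dumping a trained model" is, mechanically, just serializing those weight arrays; a minimal illustrative sketch (the file name is made up):

    import numpy as np

    # Illustrative sketch: "dumping" a model is just writing its weight
    # arrays to disk; any "internal representation" lives in those numbers.
    weights = {"W1": np.zeros((8, 16)), "W2": np.zeros((16, 4))}
    np.savez("model_dump.npz", **weights)   # hypothetical file name

    restored = np.load("model_dump.npz")    # loads back as plain arrays
    print(restored["W1"].shape)             # (8, 16)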


> a machine may be programmed to act like it understands something without understanding it as we usually conceive of it

There's a (maybe subtle) point implied in this statement: it assumes we'd be able to poke the AI in some way, meaningful to us, that would demonstrate the AI doesn't "understand".

As a trivial example, it's possible to have an extended conversation with ChatGPT about ways to treat male pattern baldness, then ask it "what do you think the likelihood is that I am balding?" and have ChatGPT reply along the lines of "I have no knowledge of your physical body." (I made that example up based on previous conversations -- ChatGPT might do better in real life.)

But if ChatGPT were also able to answer that question reasonably -- and every other question we might think to ask, all using its current (non-"understanding") method -- then can we even say that it doesn't "understand as we usually conceive of it" if we can't demonstrate that it doesn't?


Consider that your question is tantamount to the tautological "if it could understand us deeply and we couldn't prove otherwise, then could it understand us deeply?"


Does this mean that the jokes are already in the training database, and ChatGPT just repeated them?

If not, how does it work exactly? Is there any difference between "sophisticated pattern matching on a (very large) corpus" and how a human would invent a joke?


I think the difference is that humans have an underlying model of reality while ChatGPT, Stable Diffusion, and the like don't have any.

When you are a novice artist, it's usually a bad thing to rely on your mental model of reality. Children draw like that. Instead it's better to just observe the patterns of light and darkness on the model and reproduce them. But at some point, when you start being creative with your art, it really pays off to actually know how many fingers humans have.

It doesn't mean neural networks are completely incapable of building a model of reality; AlphaGo builds a model of the Go world that's better than any human player's. They just can't do that through language alone.


There were some articles here about people asking ChatGPT to explain jokes... and it wasn't really convincing.

Surely there are some jokes in the training database... enough to allow generating "new ones" by switching pieces

The question of whether humans do it differently is really the question about creation. Let's say that maybe 90% is "just" simple pattern matching (the same mechanics as existing jokes) and 10% is a new mechanic (an original joke)


> generating "new ones" by switching pieces

I think that's a bit of a simplification :-)


You forget that we ourselves are a large language model based on a large corpus. I suspect if you look closely at human intelligence you might come to the same conclusion.

You’re right that we are prone to attributing intelligence to things that aren’t. But what actually is intelligence, and is there any place you are sure it exists, rather than just a clever and thorough simulation?


Damn, someone had better call up Mozart and let him know his compositions are canceled. Big Ego Olivie has ruled that his non-creations are just notes already used elsewhere.


The emphasis should be on Artificial as in faux.


In what sense might mileage vary?


See comments above ;-)

There's some kind of philosophical debate about what creation is... Another debate might be "is creation the most specific sign of intelligence?" Is there anything like "pure creation", or is all creation only adaptation of prior art? And if so, where did the initial parts come from? Is creation only the systematic exploration of a predefined space?



