> The problem here is that yes, the occasional errors demonstrate certain flaws in its understanding of some topic.

No, they demonstrate that the machine does not understand.

> The issue is there are many times where it produces completely novel and creative output that could not have existed in the training data and can only be formulated through a complete understanding of the query it was given.

What have you done to eliminate the possibility that it assembled the words algorithmically, and that the solution generated is something the reader constructed by assigning meaning to the text response? If the answer "can only be formulated through a complete understanding of the query it was given", then you must have eliminated this possibility.

> Understanding of the world around us is not developed through the lens of a singular model or a singular piece of understanding. We build multiple models of the world and we have varying levels of understanding of each model. It is the same with chatGPT. The remarkable thing is that chatGPT understands a huge portion of these models really, really well.

Where is the evidence that chatGPT understands a thing?

> There's literally no way it could do the above without understanding what you asked it to do. Read to the end. The end demonstrates awareness of self, relative to the context and task it was asked to perform.

That's an interpretation of the text output that you assigned based on what the words in the text mean to you. I could just as easily say that Harry Potter is self-aware.

> Yet people illogically claim that for some other topic, because chatGPT failed to correctly model the topic, it therefore MUST be flawed in ALL of its understanding of the world. This claim is not logical.

I don't think you understand what we're discussing.



>No, they demonstrate that the machine does not understand.

But it doesn't prove that the machine doesn't understand anything, period. It just doesn't understand the topic or query at hand. It says nothing about whether the machine can UNDERSTAND other things.

>What have you done to eliminate the possibility that it assembled the words algorithmically, and that the solution generated is something the reader constructed by assigning meaning to the text response? If the answer "can only be formulated through a complete understanding of the query it was given", then you must have eliminated this possibility.

This is easily done. The possibility is eliminated through the sheer number of possible compositions of assembled words. It assembled the words in a certain way that, by probability, can only indicate understanding.
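
To put rough numbers on that claim about the sheer number of possible compositions (the vocabulary size and response length below are illustrative assumptions, not figures from either comment), a quick sketch in Python:

    import math

    # Back-of-the-envelope count of possible token sequences.
    # Assumed values for illustration: ~50,000-token vocabulary, 100-token response.
    vocab_size = 50_000
    response_length = 100

    order_of_magnitude = response_length * math.log10(vocab_size)
    print(f"roughly 10^{order_of_magnitude:.0f} possible sequences")  # ~10^470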

>Where is the evidence that chatGPT understands a thing?

By composing words in a novel way that can only be done through understanding of a complex concept. Neither this composition of words nor EVEN a close approximation of it could exist in any other data set on the internet.

It takes only one example of this to prove that it understands.

>That's an interpretation of the text output that you assigned based on what the words in the text mean to you. I could just as easily say that Harry Potter is self-aware.

No it's not. It's simply a composition of words that cannot be formulated without understanding. Harry Potter is obviously not self-aware. But from the text of Harry Potter, WE can deduce that the thing that composed the words to create Harry Potter understands what Harry Potter is. What composed the words to create Harry Potter? J.K. Rowling.

>I don't think you understand what we're discussing.

No, it's just a sign of your own lack of understanding.


> But it doesn't prove that the machine doesn't understand anything, period. It just doesn't understand the topic or query at hand. It says nothing about whether the machine can UNDERSTAND other things.

No, the types of errors are indicative of a complete lack of understanding. That is the point. They are errors that a thinker with an incomplete understanding would never make. They are so garbled that not even a true believer such as yourself can find a way to shoehorn a possible interpretation of correctness into them, such that you are forced to admit that the machine is in error. Otherwise, you and the other believers find an interpretation that fits and conclude that the machine understands; revealing that you yourself do not understand what 'understanding' really is.

> This is easily done. The possibility is eliminated through the sheer number of possible compositions of assembled words. It assembled the words in a certain way that, by probability, can only indicate understanding.

That's nonsense. The machine assembles words with roughly the same probability that they occur in the training material. That is why its output resembles sensible statements. The resemblance is superficial and exactly an artifact of this probability you find so compelling.
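
As a toy illustration of "assembling words with roughly the probabilities they occur in the training material", here is a minimal sketch; the corpus and the word-level counting are assumptions for illustration only, not a description of ChatGPT's actual architecture, which uses a neural network over learned representations rather than literal counts:

    import random
    from collections import defaultdict

    # Count which word follows which in a tiny "training corpus",
    # then emit words with roughly those same frequencies.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follow_counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        follow_counts[prev][nxt] += 1

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            followers = follow_counts.get(word)
            if not followers:
                break
            choices, weights = zip(*followers.items())
            word = random.choices(choices, weights=weights)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat the fish"

The output resembles sensible phrases only because the probabilities mirror the corpus; that is the artifact being described.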

> By composing words in a novel way that can only be done through understanding of a complex concept.

You haven't eliminated the possibility of autopredict, merely ceased to consider it.

> Harry Potter is obviously not self-aware.

There is more evidence for the sentience, self awareness, and understanding of concepts of Harry Potter than of chatGPT.


>No, the types of errors are indicative of a complete lack of understanding. That is the point. They are errors that a thinker with an incomplete understanding would never make. They are so garbled that not even a true believer such as yourself can find a way to shoehorn a possible interpretation of correctness into them, such that you are forced to admit that the machine is in error. Otherwise, you and the other believers find an interpretation that fits and conclude that the machine understands; revealing that you yourself do not understand what 'understanding' really is.

No. You're wrong. chatGPT only knows of text. It derives incomplete understanding of the world via text. Therefore it understands some things and not others. It is clear chatGPT doesn't perceive things in the same way we do, and it is clear the structure of its mind is different from ours, so it clearly won't understand everything in the same way you understand it.

Why are you so stuck on this stupid concept? chatGPT doesn't understand everything. We know this. Humans don't understand everything; we also know this. Answering a couple of stupid questions wrong, whether you're human or chatGPT, doesn't indicate that the human or chatGPT understands nothing at all.

>You haven't eliminated the possibility of autopredict, merely ceased to consider it.

What in the hell is auto predict? Neural networks by definition are supposed to generate unmapped output, if this is what you mean. 99 percent of the output from neural networks is by definition distinct from the training data.

>There is more evidence for the sentience, self awareness, and understanding of concepts of Harry Potter than of chatGPT.

This is a bad analogy. I'm not claiming sentience. My claim is that it understands you.


ChatGPT doesn't understand anything.

ChatGPT doesn't perceive anything.

ChatGPT does not have a mind.

People making claims otherwise are either 1) delusional or 2) fraudsters. (There is no option 3.) I'm not sure which is less bad, and I'm frankly surprised that you've made these claims about ChatGPT 'understanding' and 'perceiving' and having a mind under what appears to be a real-name account (and very aggressively too).


> No. You're wrong. chatGPT only knows of text. It derives incomplete understanding of the world via text.

It doesn't understand anything. It combines symbols according to a probabilistic algorithm and you assign meaning to it.

> Why are you so stuck on this stupid concept?

Because you keep replying with statements indicating you are yet to grasp it.

> This is a bad analogy. I'm not claiming sentience. My claim is that it understands you.

There is just as much evidence for sentience as understanding.


>It doesn't understand anything. It combines symbols according to a probabilistic algorithm and you assign meaning to it.

This is what the human brain does. I'm not assigning meaning to it. I am simply saying the algorithm is isomorphic to our definition of the word "understanding". No additional meaning.

>Because you keep replying with statements indicating you are yet to grasp it.

No no. What's going on here is I'm replying with statements to help YOU understand and you are repeatedly failing.

>There is just as much evidence for sentience as understanding.

Sentience is too fuzzy a word to discuss. We can't even fully define it. Understanding is less fuzzy and more definable, thus the question and claim of "understanding" is a much more practical query.

A human can be inconsistent and even lie. That does not mean the human does not understand you. Thus, because your logic is equally applicable to humans, it is akin to saying humans don't understand you. That is why your logic is incorrect.


> This is what the human brain does.

The human brain is embodied in human flesh and uses language to exchange models and data about the real world with other fleshy vessels. This provides a basis to assign meaning to the language. Furthermore, we know that humans understand to a greater or lesser extent because we are human and have insight into the human experience of language and reality.

These machine learning algorithms lack this fundamental basis for ascribing meaning to the symbolic tokens they deal with. Furthermore, we lack the common experience for inferring meaning and understanding; we have to interpret from the output whether there is meaning and understanding on the machine's end. Without access to internal experience we must always harbor some doubt, but given some level of nonsensical outputs we can say with confidence that there is no indication of understanding.

> A human can be inconsistent and even lie. That does not mean the human does not understand you. Thus, because your logic is equally applicable to humans, it is akin to saying humans don't understand you. That is why your logic is incorrect.

Like everyone else, I interpret statements from humans differently than statements from machines. This is because I know that humans and machines are different, and therefore the meaning assigned to the symbols involved is also different.


Flesh and understanding are separate concepts. The experience of being human is a separate concept from understanding.

Everything in the universe has a set of rules governing its existence. To understand something means that one can create novel answers to questions about that something. Those answers, however, must make sense with the rules that govern the "something" at hand. These answers must also not be "memorized" in some sort of giant query-response lookup table.

That's it. That's what I'm saying.

For example, if I ask chatGPT to emulate a bash terminal and create a new directory, it can do so, indicating it understands how a filesystem works. That is understanding.

I never said that LLMs are human. However understanding things is an aspect of being human and chatGPT captures a part of that aspect.


> Flesh and understanding are separate concepts. The experience of being human is a separate concept from understanding.

The experience of being human is what allows me to infer meaning from the words, phrases, sentences, etc. that a human generates. This is what allows me to make the leap from text to understanding (or lack thereof, or incomplete understanding, or confusion, or deception) with human-generated responses. This is what I have in the case of humans, which allows me to interpret their statements one way; and what I lack with machines, which means I have no basis for inferring understanding the same way I do with a human. If I were not human, I would not be able to infer meaning from the noises a human makes, except by observing correlations between those noises and their behavior. This is well understood in cognitive science and animal behavior.

> To understand something means that one can create novel answers to questions about that something. Those answers, however, must make sense with the rules that govern the "something" at hand. These answers must also not be "memorized" in some sort of giant query-response lookup table.

chatGPT is functionally equivalent to a lookup table with randomization.
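
Taken literally, a "lookup table with randomization" would be something like this sketch (the prompt and canned responses are hypothetical; whether ChatGPT is actually functionally equivalent to this is the point under dispute):

    import random

    # A literal lookup table with randomization: each known prompt maps to
    # a fixed set of canned responses, and one is picked at random.
    table = {
        "emulate a bash terminal and create a new directory called projects": [
            "$ mkdir projects",
            "$ mkdir projects\n$ ls\nprojects",
        ],
    }

    def respond(prompt):
        canned = table.get(prompt)
        return random.choice(canned) if canned else "(no entry for this prompt)"

    print(respond("emulate a bash terminal and create a new directory called projects"))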

> For example, if I ask chatGPT to emulate a bash terminal and create a new directory, it can do so, indicating it understands how a filesystem works. That is understanding.

It replies with a text output that is a probabilistic representation of the text that one might find on the internet in response to such a query. The emulation occurs in your mind when you read the response and assign meaning to the words and phrases it contains.

> However understanding things is an aspect of being human and chatGPT captures a part of that aspect.

You have not shown that chatGPT is anything different than a fancy lookup table with some randomization.



