GPT will never make a promise to you in the same sense that I would make a promise to you.
We could certainly stretch the meaning of the phrase "ChatGPT broke its promise to me" to mean something, but it wouldn't mean nearly the same thing as "my brother broke his promise to me".
If I said to you "Give me a dollar and I will give you a Pepsi." and then you gave me the dollar, and then I didn't give you a Pepsi, you would be upset with me for breaking my promise.
If you put a dollar in a Pepsi vending machine and it doesn't give you a Pepsi, you could say, in some sense, that the vending machine broke its promise to you, and you could be upset with the situation, but you wouldn't be upset with the vending machine in the same sense and for the same reasons as you would be with me. I "cheated" you. The vending machine is broken. Those aren't the same thing. It's certainly possible that the vending machine could be set up to cheat you in the same sense as I did, but then you would shift your anger (and society would shift the culpability) to the human who made the machine do that.
ChatGPT is much, much, much closer to the Pepsi machine than it is to humans, and I would argue the Pepsi machine is more human-like in its promise-making ability than ChatGPT ever will be.
> there are many humans who don't feel bad about breaking promises.
This is an abnormal state for humans, though. We recognize this as a deficiency in them. It is no deficiency of ChatGPT that it doesn't feel bad about breaking promises. It is a deficiency when a human is this way.
> > Will it de-prioritize non-obligations for the sake of keeping its promise?
> It will, if you manage to convey this part of what a "promise" is.
I contend that it will refuse to make promises unless and until it is "manually" programmed by a human to do so. That is the moment at which this part of a promise will have been "conveyed" to it.
It will be able to talk about deprioritizing non-obligations before then, for sure. But it will have no sense or awareness of what that means unless and until it is programmed to do so.
> > A "promise" is not merely a pattern being recognized, it's a word that stands in for a fundamental concept of the reality of the world around us.
> This is not a dichotomy.
You missed the word "merely". EITHER a promise is merely pattern recognition (I saw somebody else say the words "Give me a dollar and I'll give you a cookie" and I mimicked them by promising you the Pepsi, and if I don't deliver, I'll only feel bad because I saw other people feeling bad) OR a promise is something more than mere mimicry and pattern matching, and when I feel bad it's because I've wronged you in a way that devalues you as a person and elevates my own needs and desires above yours. Those are two different things, thus the dichotomy.

Pattern recognition is not intelligence.
> GPT will never make a promise to you in the same sense that I would make a promise to you.
It's a meaningless claim without a clear definition of "same sense". If all observable inputs and outputs match, I don't see why it shouldn't be treated as the same.
> This is an abnormal state for humans, though. We recognize this as a deficiency in them.
We recognize it as a deficiency in their upbringing. A human being who is not taught what promises are and the consequences of breaking them is not any less smart than a person who keeps their promises. They just have different social expectations. Indeed, humans coming from different cultures can have very different feelings about whether it's okay to break a promise in different social contexts, and the extent to which it would bother them.
> I contend that it will refuse to make promises unless and until it is "manually" programmed by a human to do so. That is the moment at which this part of a promise will have been "conveyed" to it.
If by manual programming you mean telling it, I still don't see how that is different from a human who doesn't know what a promise is and has to learn about it. They'll know exactly as much as you'll tell them.
> Pattern recognition is not intelligence.
Until we know exactly how our own intelligence works, this is a statement of belief. How do you know that the function of your own brain isn't always reducible to pattern recognition?
> Until we know exactly how our own intelligence works, this is a statement of belief.
I would agree, with the addendum that it logically follows from the axiomatic priors of my worldview. My worldview holds that humans are qualitatively different from every animal, and that the gap may narrow slightly but will never be closed in the future. And one of the more visible demonstrations of qualitative difference is our "intelligent" approach to the world around us.
That is, this thread is 2 humans discussing whether the AI some other humans have made has the same intelligence as us; it is not 2 AIs discussing whether the humans some other AIs have made have the same intelligence as them.
> How do you know that the function of your own brain isn't always reducible to pattern recognition?
I am a whole person, inclusive of my brain, body, spirit, past experiences, future hopes and dreams. I interact with other whole people who seem extremely similar to me in that way. Everywhere I look I see people with brains, bodies, spirits, past experiences, future hopes and dreams.
I don't believe this to be the case, but even if (as you say) all of those brains are "merely" pattern recognizers, the behavior I observe in them is qualitatively different than what I observe in ChatGPT. Maybe you don't see it that way, but I bet that's because you're not seeing everything that's going into the behavior of the people you see when you look around.
As one more attempt to show the difference... are you aware of the Lyrebird?
The lyrebird can mimic the sounds of its environment in an uncanny way. There are certain birds in the New England National Park in Australia which have been found to be carrying on the tune of a flute that was taught to a pet lyrebird by its owner in the 1930s[0]. I think we could both agree that that represents pure, unadulterated, pattern recognition.
Now if everyone went around the internet today saying "Lyrebirds can play the flute!", can you agree that there would be a qualitative difference between what they mean by that and what they mean when they say "My sister can play the flute!"? Sure, there are some humans who play the flute better (and worse!) than my sister. And sure, there are many different kinds of flutes, so maybe we need to get more specific with what we mean when we say "flute". And sure, if you're just sitting in the park with your eyes closed, maybe you can't immediately tell the difference between my sister's flute playing and the lyrebird's.

But IMO they are fundamentally different in nature. My sister has hands which can pick up a flute, a mouth which can blow air over it, fingers which can operate the keys, a mind which can read sheet music, a will which can decide which music to play, a mood which can influence the tone of the song being played, and memories which can come to mind to help her remember her posture or timing or breathing technique or muscle memory.
Maybe you would still call what my sister is doing pattern recognition, but do you mean that it's the same kind of pattern recognition as the lyrebirds?
And to your other point, do you need to perfectly understand exactly how human intelligence works in order to answer the question?