I asked my daughter this morning: What is a "promise"?
You have an idea, and I have an idea; both are probably something kind of like "a statement I make about some action I'll perform in the future". Many, many 5-year-olds can give you a working definition of what a promise is.
Which animal has a concept of a promise anywhere close to yours and mine?
Which AI program will make a promise to you? When it fails to fulfill its promise, will it feel bad? Will it feel good when it keeps its promise? Will it de-prioritize non-obligations for the sake of keeping its promise? Will it learn that it can only break its promises so many times before humans will no longer trust it when it makes a new promise?
A "promise" is not merely a pattern being recognized, it's word that stands in for a fundamental concept of the reality of the world around us. If we picked a different word (or didn't have a word in English at all) the fundamental concept wouldn't change. If you had never encountered a promise before and someone broke theirs to you, it would still feel bad. Certainly, you could recognize the patterns involved as well, but the promise isn't merely the pattern being recognized.
A rose, by any other name, would indeed smell as sweet.
The word you are looking for is an _embedding_. Embeddings are to language models as internal, too-rich-to-be-fully-described conceptions of ideas are to human brains. That's how language models can translate text: they have internal models of understanding that are not tied down to languages or even to specific verbiage within a language. Similar activations are probably happening in two language models that are explaining what a "promise" means in two different languages, or in two language models that are telling different stories about keeping a promise. This is pattern recognition to the same extent human memory and schemas are pattern recognition, IMO.
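For what it's worth, this is easy to poke at directly. Here's a minimal sketch (the sentence-transformers package, the particular multilingual model, and the example sentences are all illustrative choices on my part, not anything canonical) showing that translations of the same "promise" sentence land near each other in embedding space while an unrelated sentence doesn't:

```python
# A minimal sketch: compare multilingual sentence embeddings.
# The library, model name, and sentences are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

sentences = [
    "I promise I will return your book tomorrow.",          # English
    "Je promets de te rendre ton livre demain.",            # French
    "Ich verspreche, dir morgen dein Buch zurückzugeben.",  # German
    "The weather in Australia is hot in January.",          # unrelated
]

# Encode each sentence into the model's shared embedding space.
embeddings = model.encode(sentences)

# Pairwise cosine similarity: the three translations should score high
# with each other and low against the unrelated sentence.
print(util.cos_sim(embeddings, embeddings))
```

If the model behaves like most multilingual encoders, the three translations score close to one another and well above the unrelated sentence, which is roughly what "not tied down to languages" cashes out to in practice.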
Edit:
And for the rest of your post:
> Which AI program will make a promise to you? When it fails to fulfill its promise, will it feel bad? Will it feel good when it keeps its promise? Will it de-prioritize non-obligations for the sake of keeping its promise? Will it learn that it can only break its promises so many times before humans will no longer trust it when it makes a new promise?
All of these questions are just as valid posed against humans. Our intra-species variance is so high with regard to these questions (whether an individual feels remorse, acts on it, acts irrationally, etc.) that I can't glean a meaningful argument to be made about AI here.
I guess one thing I want to tack on here is that the above comparison (intra-species variance/human traits vs. AI traits) is so often forgotten that statements like "ChatGPT is often confident but incorrect" are passed off as meaningfully demonstrating some sort of deficiency on the part of the AI. AI is just a mirror. Humans lie, humans are incorrect, humans break promises, but when AI does these things, it's indicted for acting humanlike.
> That's how language models can translate text: they have internal models of understanding that are not tied down to languages or even specific verbiage within a language
I would phrase that same statement slightly differently:
"they have internal [collections of activation weightings] that are not tied down to languages or even specific verbiage within a language"
The phrase "models of understanding" seems to anthropomorphize the ANN. I think this is a popular way of seeing it because it's also popular to think of human beings as being a collection of neurons with various activation weightings. I think that's a gross oversimplification of humans, and I don't know that we have empirical, long-standing science to say otherwise.
> This is pattern recognition to the same extent human memory and schemas are pattern recognition, IMO.
Maybe? Even if the embedding and the "learned features" in an ANN perfectly matched your human expectations, I still think there's a metaphysical difference between what's happening. I don't think we'll ever assign moral culpability to an ANN the way we will a human. And to the extent we do arm ChatGPT with the ability to harm people, we will always hold the humans who did the arming responsible for the damage done by ChatGPT.
> All of these questions are just as valid posed against humans. Our intra-species variance is so high with regards to these questions (whether an individual feels remorse, acts on it, acts irrationally, etc.), that I can't glean a meaningful argument to be made about AI here.
The intra-species variance on "promise" is much, much lower around the mean/median. You may find extremes on either end of "how important is it to keep your promise?", but there will be wide agreement on what it means to do so, and I contend that even the extremes aren't that far apart.
> Humans lie, humans are incorrect, humans break promises, but when AI does these things, it's indicted for acting humanlike.
You don't think a human who tried to gaslight you that the year is currently 2022 would be indicted in the same way that the article is indicting ChatGPT?
The reason the discussion is even happening is because there's a huge swath of people who are trying to pretend that ChatGPT is acting like a human. If so, it's either acting like a human with brain damage, or it's acting like a malevolent human. In the former case we should ignore it, in the latter case we should lock it up.
> When it fails to fulfill its promise, will it feel bad? Will it feel good when it keeps its promise?
It will if you condition it to do so. Or at least it will say that it feels bad or good, but then, with humans, you also have to take their outputs as an accurate reflection of their internal state.
Conversely, there are many humans who don't feel bad about breaking promises.
> Will it de-prioritize non-obligations for the sake of keeping its promise?
It will if you manage to convey this part of what a "promise" is.
> A "promise" is not merely a pattern being recognized, it's word that stands in for a fundamental concept of the reality of the world around us.
This is not a dichotomy. "Promise" is a word that stands for the concept, but how did you learn what the concept is? I very much doubt that your first exposure was to a dictionary definition of "promise"; more likely, you've seen persons (including in books, cartoons etc) "promising" things, and then observed what this actually means in terms of how they behaved, and then generalized it from there. And that is pattern matching.
GPT will never make a promise to you in the same sense that I would make a promise to you.
We could certainly stretch the meaning of the phrase "ChatGPT broke its promise to me" to mean something, but it wouldn't mean nearly the same thing as "my brother broke his promise to me".
If I said to you "Give me a dollar and I will give you a Pepsi." and then you gave me the dollar, and then I didn't give you a Pepsi, you would be upset with me for breaking my promise.
If you put a dollar in a Pepsi vending machine and it doesn't give you a Pepsi, you could say, in some sense, that the vending machine broke its promise to you, and you could be upset with the situation, but you wouldn't be upset with the vending machine in the same sense and for the same reasons as you would be with me. I "cheated" you. The vending machine is broken. Those aren't the same thing. It's certainly possible that the vending machine could be set up to cheat you in the same sense as I did, but then you would shift your anger (and society would shift the culpability) to the human who made the machine do that.
ChatGPT is much, much, much closer to the Pepsi machine than it is to humans, and I would argue the Pepsi machine is more human-like in its promise-making ability than ChatGPT ever will be.
> there are many humans who don't feel bad about breaking promises.
This is an abnormal state for humans, though. We recognize this as a deficiency in them. It is no deficiency of ChatGPT that it doesn't feel bad about breaking promises. It is a deficiency when a human is this way.
> > Will it de-prioritize non-obligations for the sake of keeping its promise?
> It will if you manage to convey this part of what a "promise" is.
I contend that it will refuse to make promises unless and until it is "manually" programmed by a human to do so. That is the moment at which this part of a promise will have been "conveyed" to it.
It will be able to talk about deprioritizing non-obligations before then, for sure. But it will have no sense or awareness of what that means unless and until it is programmed to do so.
> > A "promise" is not merely a pattern being recognized, it's word that stands in for a fundamental concept of the reality of the world around us.
> This is not a dichotomy.
You missed the word "merely". EITHER a promise is merely pattern recognition (I saw somebody else say the words "Give me a dollar and I'll give you a cookie" and I mimicked them by promising you the Pepsi, and if I don't deliver, I'll only feel bad because I saw other people feeling bad) OR a promise is something more than mere mimicry and pattern matching and when I feel bad it's because I've wronged you in a way that devalues you as a person and elevates my own needs and desires above yours. Those are two different things, thus the dichotomy.
> GPT will never make a promise to you in the same sense that I would make a promise to you.
It's a meaningless claim without a clear definition of "same sense". If all observable inputs and outputs match, I don't see why it shouldn't be treated as the same.
> This is an abnormal state for humans, though. We recognize this as a deficiency in them.
We recognize it as a deficiency in their upbringing. A human being that is not trained about what promises are and the consequences of breaking them is not any less smart than a person who keeps their promises. They just have different social expectations. Indeed, humans coming from different cultures can have very different feelings about whether it's okay to break a promise in different social contexts, and the extent to which it would bother them.
> I contend that it will refuse to make promises unless and until it is "manually" programmed by a human to do so. That is the moment at which this part of a promise will have been "conveyed" to it.
If by manual programming you mean telling it, I still don't see how that is different from a human who doesn't know what a promise is and has to learn about it. They'll know exactly as much as you'll tell them.
> Pattern recognition is not intelligence.
Until we know exactly how our own intelligence works, this is a statement of belief. How do you know that the function of your own brain isn't always reducible to pattern recognition?
> Until we know exactly how our own intelligence works, this is a statement of belief.
I would agree, with the addendum that it logically follows from the axiomatic priors of my worldview. My worldview holds that humans are qualitatively different from every animal, and that the gap may narrow slightly but will never be closed in the future. And one of the more visible demonstrations of qualitative difference is our "intelligent" approach to the world around us.
That is, this thread is 2 humans discussing whether the AI some other humans have made has the same intelligence as us; this thread is not 2 AIs discussing whether the humans some other AIs have made have the same intelligence as them.
> How do you know that the function of your own brain isn't always reducible to pattern recognition?
I am a whole person, inclusive of my brain, body, spirit, past experiences, future hopes and dreams. I interact with other whole people who seem extremely similar to me in that way. Everywhere I look I see people with brains, bodies, spirits, past experiences, future hopes and dreams.
I don't believe this to be the case, but even if (as you say) all of those brains are "merely" pattern recognizers, the behavior I observe in them is qualitatively different than what I observe in ChatGPT. Maybe you don't see it that way, but I bet that's because you're not seeing everything that's going into the behavior of the people you see when you look around.
As one more attempt to show the difference... are you aware of the Lyrebird?
The lyrebird can mimic the sounds of its environment in an uncanny way. There are certain birds in the New England National Park in Australia that have been found to be carrying on the tune of a flute that was taught to a pet lyrebird by its owner in the 1930s[0]. I think we could both agree that that represents pure, unadulterated pattern recognition.
Now if everyone went around the internet today saying "Lyrebirds can play the flute!" can you agree that there would be a qualitative difference between what they mean by that, and what they mean when they say "My sister can play the flute!"? Sure, there are some humans who play the flute better (and worse!) than my sister. And sure, there are many different kinds of flutes, so maybe we need to get more specific with what we mean when we say "flute". And sure, if you're just sitting in the park with your eyes closed, maybe you can't immediately tell the difference between my sister's flute playing and the lyrebird's. But IMO they are fundamentally different in nature. My sister has hands which can pick up a flute, a mouth which can blow air over it, fingers which can operate the keys, a mind which can read sheet music, a will which can decide which music to play, a mood which can influence the tone of the song being played, memories which can come to mind to help her remember her posture or timing or breathing technique or muscle memory.
Maybe you would still call what my sister is doing pattern recognition, but do you mean that it's the same kind of pattern recognition as the lyrebird's?
And to your other point, do you need to perfectly understand exactly how human intelligence works in order to answer the question?
> A "promise" is not merely a pattern being recognized, it's word that stands in for a fundamental concept of the reality of the world around us.
It's probably even stronger than that: e.g. a promise is still a promise even if we're just brains in a vat and can be kept or broken even just in your mind (do you promise to think about X?—purely unverifiable apart from the subject of the promise, yet we still ascribe moral valence to keeping or breaking it).