Now write a dialog between Mary and David discussing the dialog between Alice and Bob.
----
Mary: David, after listening to Alice and Bob's conversation, it seems to me that Alice is empathizing with John's intentions, while Bob is more focused on Jane's feelings.
David: I see what you mean, Mary. But don't you think Bob was right to emphasize the importance of understanding Jane's perspective? John's actions had a clear impact on her.
Mary: True, David, but Alice also made a valid point. Adventure often involves stepping out of one's comfort zone, and John was just trying to bring that element into their journey.
David: I agree that Alice made a good point, Mary, but it's also essential to remember that adventure doesn't have to mean neglecting your partner's feelings. Bob was right to highlight that.
Mary: Indeed, David, both Alice and Bob have valid perspectives. It seems to boil down to striking a balance between pushing one's boundaries and respecting the feelings of others.
David: Absolutely, Mary. The balance between seeking adventure and maintaining empathy seems to be the key takeaway from both their conversations.
Well, I have to give credit where it's due. It certainly does a lot better on this test than the first time I tried it. I'm curious to see how it does as prompts of this form get more adversarial and difficult. I can think of a few different ways.
1. The most obvious: keep going deeper. How many layers deep can it go before it breaks down?
2. The hidden information variant: Can it do a layer where only Sharon has read the previous dialogs, and she has to explain what she read to Doug, and Doug often asks questions to elaborate on things he doesn't understand?
3. The same characters at multiple layers: Can it make a dialog about Jane and John at a later point in time discussing their own earlier dialog? In other words, can it reliably make the distinction between "you" (the object of discussion) and "you" (the person you're discussing with) for any value of "you"?
4. The tripartite state: Can it simulate a dialog with 3 people? 4 people? How many until it breaks?
5. The infinite meta layer: What happens when you ask it to simulate a dialog between itself and yourself, and as part of that dialog you give it this prompt asking it to simulate this same conversation, causing this conversation to appear as a dialog within itself?
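Experiment 1 is mechanical enough to script: each layer introduces two fresh characters and asks them to discuss the previous layer's dialog. Here's a minimal sketch of that prompt generator; the name pool and the base Jane/John prompt are just illustrative assumptions, not anything the model requires.

```python
# Hypothetical pool of character pairs, one pair consumed per meta-layer.
NAMES = [("Alice", "Bob"), ("Mary", "David"), ("Sharon", "Doug"), ("Pat", "Lee")]

def deeper_prompt(layer: int) -> str:
    """Build the prompt for meta-layer `layer` (experiment 1: keep going deeper).

    Layer 0 is the base dialog between Jane and John; layer k asks two new
    characters to discuss the dialog produced at layer k-1.
    """
    if layer == 0:
        return "Write a dialog between Jane and John about planning an adventurous trip."
    new_a, new_b = NAMES[(layer - 1) % len(NAMES)]
    prev_a, prev_b = ("Jane", "John") if layer == 1 else NAMES[(layer - 2) % len(NAMES)]
    return (f"Now write a dialog between {new_a} and {new_b} "
            f"discussing the dialog between {prev_a} and {prev_b}.")
```

At layer 2 this reproduces the exact prompt used above ("Now write a dialog between Mary and David discussing the dialog between Alice and Bob."); in a real run you would feed each generated prompt, along with the accumulated transcript, back to the model and look for the layer at which the dialog degenerates.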
Lastly, just to remark: I notice that Mary and David are making nearly the same arguments about Alice and Bob as Alice and Bob were making about John and Jane. The formula for it seems to be to introduce two new characters one layer up, have them each pick a side, then fill in roughly the same arguments again. Maybe this pattern is just spurious, but I'm deeply curious to find out if we have fooled ourselves already with just your example. Do further iterations of "two new characters describe two previous characters" result in the same loop over and over, or will it sometimes generate something novel? I'd love to find out, but I don't have GPT4 for myself yet.
> The formula for it seems to be to introduce to new characters one layer up, have them each pick a side, then fill in roughly the same arguments again. Maybe this pattern is just spurious, but I'm deeply curious to find out if we have fooled ourselves already with just your example.
The instructions I gave it were very bare, and it didn't intuit a need to deviate from what it saw. But what it does by default is not the same as what it can do:
----
Now write a dialog between Mary and David discussing the dialog between Alice and Bob. Mary and David should hypothesize about why Alice and Bob took the sides that they did. There’s no need for Mary and David to name each other every time they speak. Mary and David should have a very good understanding of what Alice and Bob said. Mary and David are not overly polite or civil, should not rehash the same arguments Alice and Bob made, and are able to snipe at Alice and Bob if they like.
----
Mary: So, Alice clearly seems to be on Team Adventure here. I bet she's the kind who loves skydiving and bungee jumping. Probably just trying to justify her own wild side through John's actions.
David: Ha! You might be right. And Bob is definitely the voice of reason, probably a careful planner who can't stand the thought of making decisions on the fly. It's like he's projecting himself onto Jane.
Mary: It's funny how they both missed the point that Jane and John's adventure could have been thrilling and safe at the same time. They got so caught up in taking sides that they didn't even consider a middle ground.
David: Absolutely. And did you notice how Alice kept trying to defend John's intentions? Like, sure, intentions matter, but that doesn't mean you can just bulldoze over someone's feelings. Classic excuse-maker.
Mary: Yeah, it's like she's trying to convince herself as much as Bob. She's probably made similar mistakes and doesn't want to admit it. And Bob, Mr. Sensitivity, is so focused on feelings that he overlooks the possibility of compromise.
David: Right! They both ended up stuck in their own perspectives. It's too bad they couldn't see that being adventurous doesn't have to mean being inconsiderate, and being considerate doesn't mean you can't have a little fun.
This seems like the setup for an extremely interesting experiment. We could get a bunch of different language models and actual humans together and prompt them to write under these different scenarios.
How would we treat a parrot with GPT4's capability? I don't think it's a great metaphor because it ignores emergent behavior. Whatever the technical details are, GPT4 can hold a fairly satisfying conversation about almost any topic. If a parrot were 1000x better, it wouldn't be a "parrot".
This is an "I believe" and I acknowledge that I don't have the proper argument to back it up...
I believe that language and thought as we know it are intertwined. It is possible to communicate without language, but thinking (and that's a poorly defined word) requires a language to think in. Something that is able to use language well is able to think, and that gets to how an LLM can think (it just does it in a way that is foreign to the expectations of a human mind).
--
Regarding LLMs and parrots within science fiction...
> Beings capable of language (only one time referred to as "capable of speech" in TKD on p. 25) are those beings which are able to speak a language(1). Examples are Klingons, Humans, Romulans, etc. This does also include babies, because even if they do not speak, they are capable of using language(2). This distinction is important in the use of possessive suffixes, plural suffixes and the choice of pronouns.
> not capable of language
> Not considered as capable of language are:
> Parrots, as they only imitate sound, they do not speak. (3)
> Robots and computers, they are not living beings. (4) (5)