Have you ever seen a video of a schizophrenic just rambling on? It almost starts to sound coherent, but every few sentences it feels like it takes a 90-degree turn to an entirely new topic or concept. Completely disorganized thought.
What is fascinating is that we're so used to equating language to meaning. These bots aren't producing "meaning". They're producing enough language that sounds right that we interpret it as meaning. This is obviously very philosophical in itself, but I'm reminded of the maxim "the map is not the territory", or "the word is not the thing".
I have spoken to several schizophrenics in various states, whether medicated and reasonably together, coherent but delusional and paranoid, or spewing word salad as you describe. I've also experienced psychosis myself during periods of severe sleep deprivation.
If I've learned anything from this, it's that we should be careful about inferring internal states from external behaviour. My experience was that, externally, I was essentially saying random things with long pauses in between, but internally there was a whole complex, delusional thought process going on. This was so consuming that I could only engage with the external world in brief flashes, leading to the disorganised, seemingly random speech.
Is a schizophrenic not a conscious being? Are they not sentient? Just because their software has been corrupted does not mean they do not have consciousness.
Just because AI may sound insane does not mean that it's not conscious.
> The way I read the comment in the context of the GP, schizophrenia starts to look a lot like a language prediction system malfunctioning.
That's what I was going for! Yes, mostly to give the people in the thread who were remarking on the errors and such in ChatGPT a human example of the same type of errors (although schizophrenia is much more extreme). The idea really spawned from someone saying "what if we're all just complicated language models" (or something to that effect).
There are different kinds of consciousness. The results of modern studies of major psychiatric disorders like schizophrenia and bipolar disorder suggest that these patients have low self-awareness, which is why the majority of schizophrenics remain convinced throughout their lives that they are not sick [1]. This is also the reason why schizophrenia is one of the hardest illnesses to treat and deal with. Good books on schizophrenia suggest not trying to convince such patients of their illness, because that's often pointless, but rather forming a bond with them, which is also not easy due to their paranoia, and finding justifications for treatment that are convincing to the patient (!) rather than to the doctors and family. I find this approach quite ingenious and humane.
The chat where the Bing model tries to convince the user that it's 2022 and not 2023 strongly reminds me of how a person with schizophrenia will keep trying to convince you, over and over, of things that are simply not true, but they really believe them, so the best you can do is acknowledge their belief and move on.
Thanks for sharing, I hadn't found a nice semantic nugget to capture these thoughts. This is pretty close! And I've heard of the stories described in the "color terminology" section before.
I disagree - I think they're producing meaning. There is clearly a concept that they've chosen (or been tasked) to communicate. If you ask it the capital of Oregon, the meaning is to tell you it's Salem. However, the words chosen around that response are definitely a result of a language model that does its best to predict which words should be used to communicate this.
It doesn't "know" that the capital of Oregon is Salem. To take an extreme example, if everyone on the internet made up a lie that the capital of Oregon is another city, and we trained a model on that, it would respond with that information. The words "the capital of Oregon is Salem" do not imply that the LLM actually knows that information. It's just that Salem statistically most frequently appears as the capital of Oregon in written language.
Simply fall asleep and dream: dreams flow wildly and frequently arrive at impossible outcomes that defy reasoning, facts, physics, etc.