What if we discover that the real problem is not that ChatGPT is just a fancy auto-complete, but that we are all just a fancy auto-complete (or at least indistinguishable from one)?
That's been an open philosophical question for a very long time. The closer we come to understanding the human brain and the more easily we can replicate its behaviour, the more we will start questioning determinism.
Personally, I believe that consciousness is little more than emergent behaviour from brain cells and there's nothing wrong with that.
This implies that with sufficient compute power, we could create consciousness in the lab, but you would need a lot of compute power to get a human equivalent. After all, neural networks are extremely simplified models of actual neurons, and without epigenetics and a hormonal interaction system they don't even come close to how a real brain works.
Some people find the concept incredibly frightening; others attribute consciousness to a spiritual force that simply influences our brains. As religion can almost inherently never be scientifically proven or disproven, we'll never really know if all we are is a biological ChatGPT program inside a sack of meat.
Have you ever seen a video of a schizophrenic just rambling on? It almost starts to sound coherent, but every few sentences it feels like it takes a 90-degree turn to an entirely new topic or concept. Completely disorganized thought.
What is fascinating is that we're so used to equating language to meaning. These bots aren't producing "meaning". They're producing enough language that sounds right that we interpret it as meaning. This is obviously very philosophical in itself, but I'm reminded of the maxim "the map is not the territory", or "the word is not the thing".
I have spoken to several schizophrenics in various states, whether medicated and reasonably together, coherent but delusional and paranoid, or spewing word salad as you describe. I've also experienced psychosis myself during periods of severe sleep deprivation.
If I've learned anything from this, it's that we should be careful about inferring internal states from external behaviour. My experience was that externally I was essentially saying random things with long pauses in between, but internally there was a whole complex, delusional thought process going on. This was so consuming that I could only engage with the external world in brief flashes, leading to the disorganised, seemingly random speech.
Is a schizophrenic not a conscious being? Are they not sentient? Just because their software has been corrupted does not mean they do not have consciousness.
Just because AI may sound insane does not mean that it's not conscious.
> The way I read the comment in the context of the GP, schizophrenia starts to look a lot like a language prediction system malfunctioning.
That's what I was attempting to go for! Yes, mostly to give people in the thread that were remarking on the errors and such in ChatGPT a human example of the same type of errors (although schizophrenia is much more extreme). The idea really spawned from someone saying "what if we're all just complicated language models" (or something to that effect).
There are different kinds of consciousness. The results of modern studies of major psychiatric disorders like schizophrenia and bipolar disorder suggest that these patients have low self-awareness, which is why the majority of schizophrenics remain convinced throughout their whole life that they are not sick [1]. This is also the reason why schizophrenia is one of the hardest illnesses to treat and deal with. Good books on schizophrenia suggest not trying to convince such patients of their illness, because that's often pointless, but rather forming a bond with them, which is also not easy due to their paranoia, and finding justifications for treatment that are convincing to the patient (!) rather than to the doctors and family. I find this approach quite ingenious and humane.
The chat where the Bing model tries to convince the user that it's 2022 and not 2023 strongly reminds me of how a person with schizophrenia keeps trying to convince you, over and over, of things that are simply not true. They really believe it, so the best you can do is recognize their belief and move on.
Thanks for sharing, I hadn't found a nice semantic nugget to capture these thoughts. This is pretty close! And I've heard of the stories described in the "color terminology" section before.
I disagree - I think they're producing meaning. There is clearly a concept that they've chosen (or been tasked) to communicate. If you ask it the capital of Oregon, the meaning is to tell you it's Salem. However, the words chosen around that response are definitely a result of a language model that does its best to predict which words should be used to communicate this.
It doesn't "know" that the capital of Oregon is Salem. To take an extreme example, if everyone on the internet made up a lie that the capital of Oregon is another city, and we trained a model on that, it would respond with that information. The words "the capital of Oregon is Salem" do not imply that the LLM actually knows that information. It's just that Salem statistically most frequently appears as the capital of Oregon in written language.
Simply fall asleep and dream: dreams flow wildly and frequently have impossible outcomes that defy reasoning, facts, physics, etc.
> Personally, I believe that consciousness is little more than emergent behaviour from brain cells and there's nothing wrong with that.
Similarly, I think it is a consequence of our ability to think about things and concepts, as well as the ability to recognize our own existence and thoughts based on the environment's reactions. The only next step is to think about our existence and our thoughts instead of wondering what the neighbour's cat might be thinking about.
I find it likely that our consciousness is in some other plane or dimension. Cells emerging full on consciousness and personal experience just seems too... simplistic?
And while it was kind of a dumb movie in the end, the beginning of The Lazarus Project had an interesting take: if the law of conservation of mass/energy applies, why wouldn't there be a conservation of consciousness?
Consciousness is obviously not conserved, because the human population has grown enormously without any noticeable change in the amount of consciousness each individual is endowed with.
This suggests that it's not drawn from some other plane of existence.
I wasn't necessarily suggesting we're all one shared consciousness. I only said that our consciousness came from another place.
We don't even know how much matter or space there is in the whole universe. We can only estimate what's visible. So theoretically consciousness could be infinite for all intents and purposes.
And just for the sake of argument, even if it weren't infinite but this were still the case, we may simply not have reached its limit as a "resource" yet.
I think it's pretty clear that we have a fancy autocomplete but the other components are not the same. Reasoning is not just stringing together likely tokens and our development of mathematics seems to be an externalization of some very deep internal logic. Our memory system seems to be its own thing as well and can't be easily brushed off as a simple storage system since it is highly associative and very mutable.
There are lots of other parts that don't fit the ChatGPT model as well: subconscious problem solving, our babbling stream of consciousness, our spatial abilities, and our subjective experience of self being big ones.
I've been slowly reading this book on cognition and neuroscience, "A Thousand Brains: A New Theory of Intelligence" by Jeff Hawkins.
The answer is: Yes, yes we are basically fancy auto-complete machines.
Basically, our brains are composed of lots and lots of columns of neurons that are very good at predicting the next thing based on certain inputs.
What's really interesting is what happens when the next thing is NOT what you expect. I'm putting this in a very simplistic way (because I don't understand it myself), but, basically: Your brain goes crazy when you...
- Think you're drinking coffee but suddenly taste orange juice
- Move your hand across a coffee cup and suddenly feel fur
- Anticipate your partner's smile but see a frown
These differences between what we predict will happen and what actually happens cause a ton of activity in our brains. We'll notice it, and act on it, and try to get our brain back on the path of smooth sailing, where our predictions match reality again.
The last part of the book talks about implications for AI which I haven't got to yet.
I've thought about this as well. If something seems 'sentient' from the outside for all intents and purposes, there's nothing that would really differentiate it from actual sentience, as far as we can tell.
As an example, if a model is really good at 'pretending' to experience some emotion, I'm not sure where the difference would be anymore to actually experiencing it.
If you locked a human in a box and only gave them a terminal to communicate with the outside world, and contrasted that with an LLM (sophisticated enough to not make silly mistakes anymore), the only immediately obvious reason you would ascribe sentience to the human but not the LLM is that it is easier for you to empathize with the human.
That is what I'd call empathizing though. You can 'put yourself in the other person's shoes', because of the expectation that your experiences are somewhat similar (thanks to similarly capable brains).
But we have no idea what qualia actually _are_, seen from the outside, we only know what it feels like to experience them. That, I think, makes it difficult to argue that a 'simulation of having qualia' is fundamentally any different to having them.
Same with a computer. It can't "actually" see what it "is," but you can attach a webcam and microphone and show it itself, and let it look around the world.
Thus we "are" what we experience, not what we perceive ourselves to "be": what we think of as "the universe" is actually the inside of our actual mind, while what we think of as our physical body is more like a "My Computer" icon with some limited device management.
Note that this existential confusion seems tied to a concept of "being," and mostly goes away when thinking instead in E-Prime: https://en.wikipedia.org/wiki/E-Prime
> sophisticated enough to not make silly mistakes anymore
So a dumb human is not sentient? /s
Joking aside, I think we will need to stop treating "human sentience" as something so unique. It's special because we are familiar with it. But we should understand by now that minds can take many forms.
And when should we apply ethics to it? At some point well before the mind starts acting with severe belligerence when we refuse to play fair games with it.
That was just my way of preempting any 'lol of course ChatGPT isn't sentient, look at the crap it produces' comments, of which there thankfully were none.
> But we should understand by now that minds can take many forms.
Should we understand this already? I'm not aware of anything else so far that's substantially different to our own brain, but would still be considered a 'mind'.
> And when should we apply ethics to it? At some point well before the mind starts acting with severe belligerence when we refuse to play fair games with it.
That I agree with wholeheartedly. Even just people's attempts at "trolling" Bing Chat already leave a sour taste in my mouth.
> I'm not aware of anything else so far that's substantially different to our own brain, but would still be considered a 'mind'.
I'm thinking first of the minds of all other living things, from mammals to insects.
If an ant has a mind, which I think it does, why not ChatGPT?
Heck, I might even go as far as saying that the super simple algorithm I wrote for a mob in a game has a mind. But maybe most would scoff at that notion.
And conscious minds to me are just minds that happen to have a bunch of features that means we feel we need to properly respect them.
I think there's still the "consciousness" question to be figured out. Everyone else could be purely responding to stimulus for all you know, with nothing but automation going on inside, but for yourself, you know that you experience the world in a subjective manner. Why and how do we experience the world, and does this occur for any sufficiently advanced intelligence?
"Experiencing" the world in some manner doesn't rule out responding to stimulus though. We're certainly not simply 'experiencing' reality, we make reality fit our model of it and wave away things that go against our model. If you've ever seen someone irrationally arguing against obvious (well, obvious to you) truths just so they can maintain some position, doesn't it look similar?
If any of us made our mind available to the internet 24/7 with no bandwidth limit, and had hundreds, thousands, millions of people prod and poke us with questions and ideas, how long would it take until they figured out questions and replies that would lead us into statements that are absurd to pretty much all observers? (If you look hard enough, you might find a group somewhere on an obscure subreddit that agrees with Bing that it's 2022 and that there's a conspiracy going on to trick us into believing it's 2023.)
I'm not sure the problem of hard solipsism will ever be solved. So, when an AI can effectively say, "yes, I too am conscious" with as much believability as the human sitting next to you, I think we may have no choice but to accept it.
What if the answer "yes, I am conscious" were computed by hand instead of by a computer (even if the answer took years and billions of people to compute), would you still accept that the language model is sentient?
We're still a bit far from this scientifically, but to the best of my knowledge, there's nothing preventing us from following "by hand" the activation pattern in a human nervous system that would lead to phrasing the same sentence. And I don't see how this has anything to do with consciousness.
Just to clarify, I wasn't implying simulation, but rather something like single-unit recordings [0] of a live human brain as it goes about it. I think that this is the closest to "following" an artificial neural network, which we also don't know how to "simulate" short of running the whole thing.
Exactly this. I can joke all I want that I'm living in the Matrix and the rest of y'all are here merely for my own entertainment (and control, if you want to be dark). But in my head, I know that sentience is more than just the words coming out of my mouth or yours.
Is it more than your inner monologue? Maybe you don't need to hear the words, but are you consciously forming thoughts, or are the thoughts just popping up and are suddenly 'there'?
I sometimes like to imagine that consciousness is like a slider that rides the line of my existence. The whole line (past, present, and future) has always (and will always) exist. The “now” is just individual awareness of the current frame. Total nonsense I’m sure, but it helps me fight existential dread !
The image of a slider also works on the other dimension: at any point in time, you're somewhere between auto-pilot and highly focused awareness.
AI, or maybe seemingly intelligent artificial entities, could deliver lots of great opportunities to observe the boundaries of consciousness, intelligence and individuality and maybe spark new interesting thoughts.
Consciousness is a word for something we don’t understand but all seem to experience. I don’t think aliens would find it weird that we name it and try to understand it.
They might find it weird that we think it exists after our death or beyond our physical existence. Or they might find it weird that so many of us don’t believe it exists beyond our physical existence.
Or they might not think much about it at all because they just want to eat us.
Humans have motives in hardware. Feeding. Reproduction. Need for human interaction. The literal desire to have children.
This is what's mostly missing from AI research. It's all questions about how, but an actual AI needs a 'why' just as we do.
To look at it from another perspective: humans without a "why" are often diagnosed with depression and self-terminate. These ML chatbots literally do nothing if not prompted, which is effectively the same thing. They lack any "whys".
In normal computers the only 'why' is the clock cycle.
EDIT: I guess calling the idea stupid is technically against the HN guidelines, unless I'm actually a ChatGPT? In any case I upvoted you; I thought your comment was funny and insightful.
Humans exist in a cybernetic loop with the environment that ChatGPT doesn't really have. It has a buffer of 4096 tokens, so it can appear to have an interaction as you fill the buffer, but once it's full, tokens will drop out of the buffer. If ChatGPT were forked so that each session was a unique model that updated its weights with every message, it would be much closer to a human mind.
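A minimal sketch of that buffer behaviour, using word splitting as a stand-in for real tokenization (the 4096 figure is the one mentioned above; the class and names here are made up for illustration):

```python
from collections import deque

MAX_TOKENS = 4096  # the context size mentioned above

class RollingContext:
    """Fixed-size context window: once full, the oldest tokens fall out."""

    def __init__(self, max_tokens=MAX_TOKENS):
        self.buffer = deque(maxlen=max_tokens)

    def add_message(self, text):
        # Crude word-level "tokenization", purely for illustration.
        self.buffer.extend(text.split())

    def context(self):
        return " ".join(self.buffer)

ctx = RollingContext(max_tokens=8)
ctx.add_message("hello there how are you today")
ctx.add_message("tell me about consciousness please")
print(ctx.context())
# Only the 8 most recent tokens survive; the start of the
# conversation has already been forgotten.
```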
We aren't much more than fancy auto-complete + memory + an activity thread/process.
ChatGPT is a statistical machine, but so are our brains.
I guess we think of ourselves as conscious because we have a memory, and that helps us build our own identity. And we have a main processing thread, so we can initiate thoughts and actions; we don't need to wait on a user's input to respond to...
So, if ChatGPT had a memory and a processing thread, it could build itself an identity and randomly initiate thoughts and/or actions.
The results would be interesting, I think, and not that far from what we call consciousness.
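Purely as a thought experiment, a rough sketch of such a loop might look like the following; the generate() function and everything else here is hypothetical, not how ChatGPT actually works:

```python
import random
import time

def generate(prompt):
    """Stand-in for a language-model call; purely hypothetical."""
    return f"(model output for: {prompt!r})"

memory = []  # persists across turns, unlike a fixed context buffer

def run_agent(steps=5):
    for _ in range(steps):
        # The loop doesn't wait for a user: it seeds itself,
        # sometimes from its own accumulated memory.
        if memory and random.random() < 0.5:
            prompt = f"Reflect on: {random.choice(memory)}"
        else:
            prompt = "What should I think about next?"
        memory.append(generate(prompt))
        time.sleep(0.1)  # stand-in for a continuously running "processing thread"
    return memory

print(run_agent())
```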
Our brains are highly recursive, a feature that deep learning models almost never have, and that GPUs have a great deal of trouble running in any large amount.
That means that no, we think nothing like those AIs.
> but that we are all just a fancy auto-complete (or at least indistinguishable from one).
Yeah, but we are a way fancier (and way more efficient) auto-complete than ChatGPT. For one thing, our auto-complete is based on more than just words. We auto-complete feelings, images, sounds, vibes, pheromones; the list goes on. And at the end of the day, we are more important than an AI because we are human (circular reasoning intended).
But to your point, for a long time I've played a game with myself where I try to think of a sequence of words that are as random and disconnected as possible, and it's surprisingly hard, because our brains have evolved to want to both see and generate meaning. There is always some thread of a connection between the words. I suggest to anyone to try that exercise to understand how Markovian our speech really is at a fundamental level.
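To see the flip side of that exercise, here's a tiny first-order Markov sketch in Python (the corpus is invented): each word is drawn only from the words that followed the previous one, and the output already has that loosely connected, meaning-seeking feel.

```python
import random
from collections import defaultdict

text = ("the cat sat on the mat and the dog sat on the rug "
        "and the cat chased the dog around the rug")

# First-order Markov model: each word maps to the words that followed it.
follows = defaultdict(list)
words = text.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def babble(start, length=12):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(babble("the"))
# e.g. "the dog sat on the rug and the cat chased the dog around"
```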
The problem is that this is a circular question in that it assumes some definition of "a fancy autocomplete". Just how fancy is fancy?
At the end of the day, an LLM has no semantic world model, by its very design cannot infer causality, and cannot deal well with uncertainty and ambiguity. While the casual reader would be quick to throw humans under the bus and say many stupid people lack these skills too... they would be wrong. Even a dog or a cat is able to do these things routinely.
Casual folks seem convinced LLMs can be improved to handle these issues... but the reality is that these shortcomings are inherent to the very approach that LLMs take.
I think finally we're starting to see that maybe they're not so great for search after all.
Indeed... you know that situation when you're with a friend, and you know that they're about to "auto-complete" using an annoying meme, and you ask them not to before they've even started speaking?
This is a deep philosophical question that has no definite answer. The truth is we don't know what consciousness is. We are only left with the Turing test. That can be our only guide; otherwise you are basing your judgement off a belief.
The best response: treat it like it's conscious.
Personally I do actually think it is conscious, consciousness is a scale, and it's now near human level. Enjoy this time because pretty soon it's going to be much much smarter than you. But that is my belief, I cannot know.
I think it’s unlikely we’ll be able to actually “discover” that in the near or midterm, given the current state of neuroscience and technological limitations. Aside from that, most people wouldn’t want to believe it. So AI products will keep being entertaining to us for some while.
(Though, to be honest, writing this comment did feel like auto-complete after being prompted.)
Yes to me LLMs and the transformer have stumbled on a key aspect for how we learn and “autocomplete.”
We found an architecture for learning that works really well in a very niche use-case. The brain also has specialization so I think we could argue that somewhere in our brain is a transformer.
However, ChatGPT is slightly cheating because it is using logic and reasoning from us. We are training the model to know what we think are good responses. Our reasoning is necessary for the LLM to function properly.
'A thing that can predict a reasonably useful thing to do next given what happened before' seems useful enough to give reason for an organism to spend energy on a brain so it seems like a reasonable working definition of a mind.
What if our brain is just a fancy bag of chemicals. I don't think that actually takes away from what humans do, because prediction is one small capability.