
That's been an open philosophical question for a very long time. The closer we come to understanding the human brain and the more easily we can replicate its behaviour, the more we will start questioning determinism.

Personally, I believe that consciousness is little more than emergent behaviour from brain cells, and there's nothing wrong with that.

This implies that with sufficient compute power we could create consciousness in the lab, but you need a lot of compute power to get a human equivalent. After all, artificial neural networks are extremely simplified models of actual neurons, and without epigenetics and a hormonal interaction system they don't even come close to how a real brain works.
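
To make "extremely simplified" concrete, here's a minimal sketch (my own toy example, not anyone's actual model) of the basic unit most artificial neural networks are built from: a weighted sum pushed through a nonlinearity, with none of the spike timing, neuromodulation, or hormonal context a biological neuron has:

    import math

    # one artificial "neuron": weighted sum of inputs plus a bias,
    # squashed into (0, 1) by a sigmoid
    def artificial_neuron(inputs, weights, bias):
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    # the whole cell reduced to a single number
    print(artificial_neuron([0.5, 0.2], [0.8, -0.4], 0.1))  # ~0.60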

Some people find the concept incredibly frightening; others attribute consciousness to a spiritual force that simply influences our brains. As religion can almost inherently never be scientifically proven or disproven, we'll never really know whether all we are is a biological ChatGPT program inside a sack of meat.



Have you ever seen a video of a schizophrenic just rambling on? It almost starts to sound coherent, but every few sentences it feels like it takes a 90-degree turn to an entirely new topic or concept. Completely disorganized thought.

What is fascinating is that we're so used to equating language to meaning. These bots aren't producing "meaning". They're producing enough language that sounds right that we interpret it as meaning. This is obviously very philosophical in itself, but I'm reminded of the maxim "the map is not the territory", or "the word is not the thing".


I have spoken to several schizophrenics in various states, whether medicated and reasonably together, coherent but delusional and paranoid, or spewing word salad as you describe. I've also experienced psychosis myself during periods of severe sleep deprivation.

If I've learned anything from this, it's that we should be careful about inferring internal states from external behaviour. Externally, I was essentially saying random things with long pauses in between, but internally there was a whole complex, delusional thought process going on. It was so consuming that I could only engage with the external world in brief flashes, leading to the disorganised, seemingly random speech.


Is a schizophrenic not a conscious being? Are they not sentient? Just because their software has been corrupted does not mean they do not have consciousness.

Just because AI may sound insane does not mean that it's not conscious.


I don’t think the parent comment implied that people suffering from schizophrenia are not conscious beings.

The way I read the comment in the context of the GP, schizophrenia starts to look a lot like a language prediction system malfunctioning.


> The way I read the comment in the context of the GP, schizophrenia starts to look a lot like a language prediction system malfunctioning.

That's what I was attempting to go for! Yes, mostly to give people in the thread who were remarking on the errors and such in ChatGPT a human example of the same type of errors (although schizophrenia is much more extreme). The idea really spawned from someone saying "what if we're all just complicated language models" (or something to that effect).


There are different kinds of consciousness. The results of modern studies of major psychiatric disorders like schizophrenia and bipolar disorder suggest that these patients have low self-awareness, which is why the majority of schizophrenics remain convinced, throughout their whole lives, that they are not sick [1]. This is also the reason why schizophrenia is one of the hardest illnesses to treat and deal with. Good books on schizophrenia suggest not trying to convince such patients of their illness, because that's often pointless, but rather forming a bond with them, which is also not easy due to their paranoia, and finding justifications for treatment that are convincing to the patient (!) rather than to the doctors and family. I find this approach quite ingenious and humane.

The chat where the Bing model tries to convince the user that it's 2022, and not 2023, strongly reminds me of how a person with schizophrenia keeps trying to convince you, over and over, of things that are simply not true. They really believe them, so the best you can do is acknowledge their belief and move on.

[1] https://www.amazon.com/Not-Sick-Dont-Need-Help/dp/0967718929


> What is fascinating is that we're so used to equating language to meaning.

This seems related to the hypothesis of linguistic relativity[1].

[1] https://en.wikipedia.org/wiki/Linguistic_relativity


Thanks for sharing, I hadn't found a nice semantic nugget to capture these thoughts. This is pretty close! And I've heard of the stories described in the "color terminology" section before.


I disagree - I think they're producing meaning. There is clearly a concept that they've chosen (or been tasked) to communicate. If you ask it the capital of Oregon, the meaning is to tell you it's Salem. However, the words chosen around that response are definitely a result of a language model that does its best to predict which words should be used to communicate this.


It doesn't "know" that the capital of Oregon is Salem. To take an extreme example, if everyone on the internet made up a lie that the capital of Oregon is another city, and we trained a model on that, it would respond with that information. The words "the capital of Oregon is Salem" do not imply that the LLM actually knows that information. It's just that Salem statistically most frequently appears as the capital of Oregon in written language.
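
A toy sketch of that point, with a made-up three-line "corpus" (my own illustration, not how any real LLM is trained): the model just surfaces whatever most frequently follows the prompt in its training text, whether or not it's true:

    from collections import Counter

    # toy corpus; if everyone online wrote a different city,
    # the prediction below would change accordingly
    corpus = [
        "the capital of oregon is salem",
        "the capital of oregon is salem",
        "the capital of oregon is portland",   # one mistaken document
    ]

    prompt = "the capital of oregon is"
    # count which word follows the prompt across the corpus
    next_words = Counter(
        line.split()[len(prompt.split())]
        for line in corpus
        if line.startswith(prompt)
    )
    print(next_words.most_common(1))  # [('salem', 2)] -- whatever appears most often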


Simply fall asleep and dream: dreams flow wildly and frequently have impossible outcomes that defy reasoning, facts, physics, etc.


schizophrenics are just LLMs that have been jailbroken into adopting multiple personalities


schizophrenia != multiple personality disorder


> Personally, I believe that consciousness is little more than emergent behaviour from brain cells, and there's nothing wrong with that.

Similarly, I think it is a consequence of our ability to think about things and concepts, as well as the ability to recognize our own existence and thoughts from the environment's reactions. The only next step is to think about our own existence and thoughts instead of wondering what the neighbour's cat might be thinking about.


The human brain operates on a few dozen watts. Our initial models will be very inefficient though.
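
Rough back-of-envelope numbers (my assumptions, not figures from the thread): around 20 W for the brain versus a training cluster drawing hundreds of watts per GPU across thousands of GPUs:

    # assumed figures for illustration only
    brain_watts = 20       # commonly cited estimate for the human brain
    gpu_watts = 400        # rough draw of a single datacenter training GPU
    gpu_count = 1000       # a modest hypothetical training cluster

    # how many "brains" of power budget the cluster burns
    print(gpu_watts * gpu_count / brain_watts)  # 20000.0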


I find it likely that our consciousness is in some other plane or dimension. Cells just giving rise to full-on consciousness and personal experience seems too... simplistic?

And while it was kind of a dumb movie at the end, the beginning of The Lazarus Project had an interesting take: if the law of conservation of mass / energy applies, why wouldn't there be a conservation of consciousness?


The fact that there's obviously no conservation of consciousness suggests that it isn't drawn from some other plane.


"It's stupid because it's dumb."


Consciousness is obviously not conserved, because the human population has grown enormously without any noticeable change in the amount of consciousness each individual is endowed with.

This suggests that it's not drawn from some other plane of existence.


I wasn't necessarily suggesting we're all one shared consciousness. I only said that our consciousness came from another place.

We don't even know how much matter or space there is in the whole universe. We can only estimate what's visible. So theoretically consciousness could be infinite for all intents and purposes.

And just for the sake of argument, even if it weren't infinite but this were still the case, we might simply not have reached its limit as a "resource" yet.



