Hacker News
Is artificial consciousness achievable? Lessons from the human brain (arxiv.org)
216 points by wonderlandcal on May 19, 2024 | hide | past | favorite | 578 comments


I’m writing this comment so that people who want to know more about alternative theories of consciousness (to materialism/physicalism [1]) can know where to go to find well-argued positions on the topic.

(To be clear, I’m not here to argue about the topic or try to persuade anyone of any position – that’s a waste of everyone’s time).

I recommend seeking out discussions involving:

- Federico Faggin: inventor of silicon-gate technology and developer of the earliest microprocessors;

- Bernardo Kastrup: Ph.D. in computer engineering (reconfigurable computing, artificial intelligence), former CERN engineer at the LHC;

- Donald D. Hoffman: Ph.D. in computational psychology, professor in Cognitive Sciences at UC Irvine.

On YouTube you can find plenty of discussions involving these figures, some with each other, and plenty more with others.

I’d suggest it’s particularly important to explore these discussions as dispassionately as possible if you regard materialism as the only theory of mind that has any scientific credibility or validity.

As Christopher Hitchens reminds us in his legendary oration on John Stuart Mill and free speech [2], it’s only by thoroughly understanding the opposing view that we can thoroughly understand our own position on any topic.

[1] https://en.wikipedia.org/wiki/Materialism

[2] https://youtu.be/zDap-K6GmL0?t=120


A philosophical framework in which creating an artificial entity that is conscious and self-aware in a human-like manner is as straightforward as modeling the human brain is monistic materialism.

Of course, it’s not the only framework available. Among the modern takes, Donald Hoffman’s interface theory of perception (explored in, say, his Objects of Consciousness paper[0]) is an interesting one that appears to align with monistic idealism, for example.

Being wrong about this is generally not that impactful, until it concerns policies around ML. Adopting the former means we may have conscious software, which presumably should be granted human rights. However, if we hold the latter, manufacturing a “true” artificial consciousness may be unachievable using the means we employ (it might be just a philosophical zombie).

[0] I don’t personally endorse the paper or his views, but they can be an acceptable starting point for a technical person interested in exploring monistic idealism: https://www.frontiersin.org/journals/psychology/articles/10....


> we may have conscious software, which presumably should be granted human rights

A dog (most likely) has consciousness, but no human rights.


It has animal rights, which are broadly commensurate with the level of consciousness and agency it’s deemed to have.

Mammals and other animals have legal protections not afforded to fish and insects.


In some countries… some countries hardly observe basic human rights, much less any animal rights. Some have none on the books.


Sure, but this is nitpicking, as is your GP comment, and neither refutes the point that the original commenter was making: modern/advanced societies have laws to protect conscious beings from exploitation and cruelty.

(As I was writing the comment I thought “ugh will someone chime in and point out that not all countries have strong animal protection laws? Do I really need to preempt that in my comment?”)


I was not nitpicking, but I could have spent more time on my reply.

What I was hinting at is that it is not simply consciousness that gets us these laws. Laws have been around for a long time, and have many different reasons for existing and persisting. The most rational reason for laws is probably that they help us to thrive as a species.

IMHO laws do not easily extend to animals or other organisms, let alone AI systems. What is the use of animal rights laws if you can simply get killed to be eaten (cows, pigs), or if you are considered a nuisance (bugs). What would be the reason to provide AIs with protection laws if they have no memory, no emotion, and no pain?


>Modern/advanced societies have laws to protect conscious beings from exploitation and cruelty.

I can think of no such society where this is generally true. One need only consider that pigs are far smarter than dogs, and then consider the median pig’s life in said societies.


Laws exist that ban practices that are - according to those who set and enforce the laws - excessively cruel to pigs. That’s all this discussion is about.

That pigs are still treated with cruelty is a terrible thing, and I’d happily see more done to protect all animals against cruelty. But it’s a separate argument to what’s relevant here.


No, unfortunately, this is goalpost shifting. We’ve gone from “cruelty” to “excessive cruelty,” and there are no laws that prevent the common-sense understanding of either from happening to the median pig. One can also point out that we in fact engage in such extreme cruelty that they have had to make it illegal to document it:

https://www.vox.com/future-perfect/2019/1/11/18176551/ag-gag...


But does a bug have consciousness? What about a bacterium?


I suppose the bug does, but the bacterium doesn't. I'd assume that some kind of memory is required for consciousness.

I'd even go so far to say that consciousness is nothing more than having a memory of the state you were in.


Bacterial cells absolutely have types of memory. And by your definition, a Python program written by any random undergrad in CS 101 has consciousness.
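To make that objection concrete, here is a hedged toy sketch in Python (all names hypothetical): a few lines suffice to give a program a "memory of the state it was in" that it can later report, which the proposed definition would already admit as consciousness.

```python
class Counter:
    """Toy CS-101-level program that records and reports its past states."""

    def __init__(self):
        self.state = 0
        self.history = []  # a "memory" of the states it was in

    def step(self):
        self.history.append(self.state)  # remember the state we were in
        self.state += 1

    def recall(self):
        # "observe" the memory of the most recent prior state
        return self.history[-1] if self.history else None


c = Counter()
c.step()
c.step()
print(c.recall())  # → 1, the state it was in before the last step
```

Nothing here is doing anything interesting, which is exactly the point: "has a memory of its prior state" is trivially satisfiable by machinery nobody would call conscious.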


In order for my definition to make sense, the organism or program must be able to observe the memory of the state. In the case of the bacteria and the Python program, I doubt they are able to do that in any meaningful way.

But I would not mind if a slightly more involved program, or a system of plants for that matter, would be considered conscious. The basic definition seems fairly irrelevant, and it obviously matters how much the specific type of consciousness matches our own experience for us humans to actually care.


Just handwavy nonsense. What counts as “observing”? Obviously the bacterial system will “observe” the memory when using it to determine current behavior. If the basic definition is irrelevant, why did you post a comment outlining a claim of what consciousness is “nothing more than”? This is silly and not worth further engagement.


Note that I tried to counter the idea that an AI should presumably get human rights. In that context, I think a definition of consciousness is irrelevant.


If a chatbot obtained from an emulation of a human brain behaves in a human-like manner, and is attributed consciousness and self-awareness, good luck arguing that its consciousness is like that of a dog.

…and even if you succeed, abusing a dog in the way we abuse ML-based products would not be acceptable in any developed country.


If we abuse a dog we have no way to restore it to its previous state. With computer programs we have a perfect time machine, so anything that one may call "horrible" can be done and then undone without any moral consequences.
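The "perfect time machine" claim can be sketched concretely. This is a hedged toy (the `agent` dict and its fields are hypothetical stand-ins), not a claim about real minds: a program whose entire state is data can be checkpointed and restored exactly.

```python
import copy

# Hypothetical simulated agent: its entire "mind" is just this data.
agent = {"mood": "content", "memories": ["a quiet morning"]}

# The "time machine": a full, independent snapshot of the state.
checkpoint = copy.deepcopy(agent)

# Do something "horrible" to the simulated state...
agent["mood"] = "suffering"
agent["memories"].append("something terrible")

# ...then undo it perfectly.
agent = checkpoint
print(agent)  # identical to the pre-abuse state; no trace remains
```

Note that `deepcopy` matters here: a shallow copy would share the inner list, and the "undone" agent would still carry the bad memory.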

This btw also is the answer to the question of evil in religion, god can do whatever he wants without being evil, because it is effectively all in his imagination, and for the people living in our computers we'll be gods.


There are many leaps here, but the biggest leap is how reverting abuse later nullifies suffering incurred.


Because the sufferer is back exactly at the state before suffering, without even a recollection of suffering. And all the suffering was merely number multiplication.


This belief is very, very far from mainstream. It implies that suffering is a complete non-issue in general—after all, we all die and at that point by definition don’t have a recollection of prior suffering (or anything, really).

Since “resetting” is equivalent to killing a being and creating a new being in its place, saying it nullifies all suffering already incurred can take you on a very dangerous path.


> Being wrong about this is generally not that impactful, until it concerns policies around ML. Adopting the former means we may have conscious software, which presumably should be granted human rights. However, if we hold the latter, manufacturing a “true” artificial consciousness may be unachievable using the means we employ (it might be just a philosophical zombie).

I think in practice, it will depend on how AIs behave. If an AI forms relationships with humans which humans experience as emotionally significant, and in which it displays individuality and autonomy, humans will begin thinking of it as a real "person", and want to endow it with the rights of a person. Conversely, if AI acts in an emotionless way, or displays no emotional autonomy (a sultry voice saying "I'm here to do anything to make you happy, and when I say anything I really mean anything"), or lack individuality ("I'm OpenAI GPT-9, and there are 13,784,312 fungible instances of me currently running")–most people won't think of it as a real "person", and will not take the idea of "AI rights" seriously.

From a materialist philosophical perspective, whether an AI is a "person" should be independent of whether humans are inclined to view it as one. However, certain non-materialist philosophies might suppose actual reality to be more in accord with those human perceptions – e.g. if sufficiently many soul-bearers reasonably believe something has (or ought to have) a soul, God would feel thereby obliged to endow it with one. (The idea doesn't necessarily have to be theistic: one could suppose there exists some kind of impersonal transcendent law of soul-endowment, much like how karma is viewed in Buddhism and Hinduism.)


>I think in practice, it will depend on how AIs behave. If an AI forms relationships with humans which humans experience as emotionally significant, and in which it displays individuality and autonomy, humans will begin thinking of it as a real "person", and want to endow it with the rights of a person.

I can't help but feel that describes our relationship with animals we consider pets, albeit less so the individuality and autonomy part. On the other hand, if our pets ended up with human-level intelligence tomorrow, I am doubtful most humans would be rushing to confer full rights upon them.


> I can't help but feel that describes our relationship with animals we consider pets, albeit less so the individuality and autonomy part.

Many pets have a lot of individuality and autonomy. Cats and dogs have their own individual personalities and likes/dislikes, and they have their own desires which sometimes contradict those of their human owners (e.g. your dog's desire to chew your brand new expensive shoes, versus your desire for them not to be chewed)

> On the other hand, if our pets ended up with human-level intelligence tomorrow, I am doubtful most humans would be rushing to confer full rights upon them.

I disagree. I think for most people who reject strong versions of animal rights ("animals have fundamentally the same rights as humans do"), it is their lack of human-level intelligence which they use to justify to themselves that rejection. If an animal demonstrated genuine human-level intelligence, such that it was obvious to all and no one could deny it, I think the clear majority of people would be willing to extend human-level rights to that animal.


>If an animal demonstrated genuine human-level intelligence, such that it was obvious to all and no one could deny it, I think the clear majority of people would be willing to extend human-level rights to that animal.

You're probably right.

Then again, recent history hasn't boded well for whales, or the 50M or so human beings that other human beings keep enslaved in 2024.

Maybe AI just needs its own Short Circuit 2 story. I'm just having a hard time seeing that play out as opposed to Johnny Five ending up as a SaaS product.


> Then again, recent history hasn't bode well for whales,

That’s because the claim that whales possess human-level intelligence is controversial and unproven. You can’t have a conversation with a whale. While whale language has a certain complexity, we don’t know whether it is capable of conveying the kind of abstract conceptual ideas which human language can.

> or the 50M or so human beings that other human beings keep enslaved in 2024

Most enslavers do not disagree with the idea that slaves are human or entitled to fundamental human rights; their disagreement is with the idea that fundamental human rights include a universal right not to be enslaved. And to be honest, the idea of such a universal human right is historically rather novel: in all four of Europe, Asia, Africa and the pre-Columbian Americas, slavery has a history going back thousands of years, and it is only in the last couple of centuries that an (incomplete) consensus has emerged that it is universally wrong.


You mean human-like, not human-level. Intelligence cannot be identified, much less ranked, without sufficient shared perceptional, cultural, etc. background. You can find two humans who are alien enough to each other that recognising shared intelligence would take a bit of effort; times a million that for a sufficiently non-human system like a whale, more still for an anthill, more still for an oak forest.


I think the real crux of the matter would be the difference between rights versus stuff that costs people money.


> I think the real crux of the matter would be the difference between rights versus stuff that costs people money.

A human child costs a lot more than a dog or cat does, and yet society grants the child vastly greater rights than the dog or cat. The difference is due to cognitive capacity: a smart five-year-old can give a lecture about how you are allegedly violating their rights (e.g. “making me go to bed on time violates my right to have fun”); no dog or cat in the world can do that.


I don't understand what that comparison is intended to prove.

Sure, a 5-year-old is more eloquent than a dog, but throughout history there have been lots of 5-year-olds that were still enslaved, along with adults that were even more eloquent and intelligent.

My point is that talk is cheap. It's one thing to agree that a sheepdog has freedom of speech, but there will be a lot more resistance if we start talking about reparations for years of sheep-herding labor.


We are having this conversation assuming a society which rejects slavery in principle. Given that rejection, I don't see how past history of enslaving people is relevant to questions of the future–unless one supposes that rejection is going to be reversed at some point, which seems unlikely.


I do not think cognitive capacity is the motivation here: human children are our offspring.

It's simple favoritism of one's offspring, which would seem to be a primary motivator here.


> human children are our offspring.

> It's simple favoritism of ones offspring, which would seem to be a primary motivator here.

But some random child on the other side of the planet isn't one of my offspring.


> I'm OpenAI GPT-9, and there are 13,784,312 fungible instances of me currently running

This reminded me of https://qntm.org/mmacevedo


> However, if we hold the latter, manufacturing a “true” artificial consciousness may be unachievable using the means we employ (it might be just a philosophical zombie).

Coincidentally, this is why I'm generally wary of this line of thinking. It does remind me of Descartes, who was arguing that animals couldn't "really" feel pain (and thus e.g. vivisection is perfectly ethical): https://en.wikipedia.org/wiki/Ren%C3%A9_Descartes#On_animals


I may believe that an animal (or indeed another person) has consciousness and can suffer because they are sufficiently similar to me in how they came about to exist, but also not believe that about a constructed entity that lacks any such similarities, regardless of what it outputs—those two views are not in conflict.

I do share your worry that rejecting the latter can lead to rejecting the former in some cases, however.


To be fair, 'modelling the brain' might not include things like neuron metabolism, which probably isn't required for AI but is part of the substrate of our own consciousness.


> creating an artificial entity that is conscious and self-aware in a human-like manner is as straightforward as modeling the human brain is monistic materialism.

Why, then, isn't *not* modelling such an entity also creating a self-aware entity? After all, the outcome of a computation does not depend on whether it is actually performed.


There are computations whose outcomes are unknown until actually performed.
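A standard concrete example of this (my illustration, not the commenter's): an iterated cryptographic hash, where no shortcut is known that yields the result without performing every step.

```python
import hashlib


def hash_chain(seed: bytes, n: int) -> str:
    """Apply SHA-256 n times in a row.

    The result is fully determined by the seed, yet as far as anyone
    knows it can only be learned by actually running all n steps.
    """
    h = seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
    return h.hex()


# Determined, but unknown until computed.
digest = hash_chain(b"seed", 10_000)
```

A single step reduces to a plain SHA-256 of the seed; beyond that, each output is the only known route to the next.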


What changes when they are known? I believe it affects your consciousness, not the one being simulated. The latter does not have an “I'm actually being simulated” input.


"Actually simulated" is such an oxymoron. So which is it? Is the consciousness actual, simulated, or actually simulated? And does the resulting state of mind change the universe, or merely reveal its hidden structure?

It's easy to get lost in these unsolvable paradoxes when you try reducing all of creation down to logic. Problem is, logic is not all of creation.

Consciousness requires a soul. Otherwise you're confused stardust sans mission.

A computer is just a calculator. You might as well ask if {addition, subtraction, multiplication, division} is God.


> Consciousness requires a soul

What's a soul? How can you possibly know that's what's required?

Maybe it requires a blerpqu.


The soul is the animating principle. The Latin for soul, anima, is where we get our words animal, (in)animate, and so on.

In the most loose sense the soul is whatever it is that a living thing stops having when it becomes a dead thing.

Philosophically speaking saying “consciousness requires a soul” is a consequence of the observation that consciousness requires not being dead.

People, many of whom were assuredly considerably more intelligent than you or me, have spent thousands of years pondering what exactly the nature of souls is. Your metasyntactic zinger adds exactly nothing to that ongoing dialog. But by definition we know souls exist, at least as much as any abstract principle does and perhaps more than some.


So then bacteria have souls? And computers will never have one since they're not biological. But also there doesn't seem to be a relationship between souls and consciousness, so they are irrelevant.


I will argue that a self-reproducing organism that has organized its own reproduction over the course of N generations, and which metabolizes what inputs it can, defining a niche in the complex web of life, != an attempt by one organism to call its tools independent consciousnesses.


> So then bacteria have souls?

Yes.

Plants, fungi, and everything else that's alive also have souls. In fact the medical term "vegetative state" uses Aristotelian vocabulary. Obviously it's not saying the patient has acquired chloroplasts.

> And computers will never have one since they're not biological.

That depends. Can a computing device come alive? Inanimate matter evidently somehow came alive at least once so I don't see how we can rule it out.

> But also there doesn't seem to be a relationship between souls and consciousness, so they are irrelevant.

So far as I know everything that we've verified to have consciousness is alive and thus has a soul. I'm very interested in any counter-examples if you have any to offer though.


> Consciousness requires a soul

Any actual evidence for this is welcome.


I do not think consciousness requires a soul or any other magic (not a dualist myself), but you have to acknowledge that there is 1) evidence of consciousness, 2) no evidence of where it comes from, and 3) no way to prove or falsify a statement that it magically arises from entities described by modern models in natural sciences.


I'm glad to discuss it, but that's all the evidence you're ever going to get, if you personally have not experienced... it. Through meditation, psychedelia, or religious practice. First-hand witness accounts and historical records are your evidence. How long you ignore it all is up to you.

Once upon a time I was locked in a mind prison of material logic. I considered myself transhuman, dreamed of cybernetics and the singularity. Thought that all religious peoples are idiots stuck in ancient fairy tales. Then I experienced a sequence of odd coincidences and inexplicable situations where spiritual entities guided me, causing a complete deconstruction of my entire worldview.

Now it's obvious to me the truth I had been so vehemently running away from my whole life, and the propaganda, indoctrination, and brainwashing are laid bare before me to see. The modern "we don't need no God" ideology is literally terminally ill. Everyone who blindly follows science and government are being consumed by them. Fertility is falling off a cliff. They're all mentally ill and medicated. Mandatory medical genetic experiments and engineered bioweapon plagues are murdering its followers and the fact that this ongoing debacle is being actively suppressed in media and politics tells me clear as day, the devil himself has taken root and hold of this world.

Meanwhile, in the churches I frequent, I see happy families practically building paradise on Earth, the kind I got to see as a child, and I know most people are headed to the same hell I climbed out of, like zombies. No amount of talking with these self-destructive delusionists will convince them to even try praying once in their lives, even as a joke.

I have all the evidence I need; My life has made a miraculous turnaround from spiritual guidance. If you'd like, I can ask the spirits to guide you through me.

The main thing to know about the spirit world is this: Good spirits require your explicit permission to come aboard and help you. The bad ones invite themselves in and destroy you from within. That's why humans are fundamentally religious creatures, and you ignore the spirit world at your own peril.

It's like the microbiome in your body. You gotta let good/neutral bacteria make a home of you, lest the bad ones take residence and literally kill you from within. And it's not "you fall dead the moment you stop believing", it's "these hostile bacteria literally control your mind to cause unhealthy behavior for their benefit", like overeating sugar (obesity very prevalent), being selfish, greedy, etc.

A prayer a day keeps the evil spirits away. This is not a matter of faith for me, it's the truth I've experienced for myself, and the evidence for which I see written in our very civilizational DNA. You just choose to ignore the evidence because you think yourself bigger than God. It's an indefensible position the ramifications of which you'll regret when it's too late, or when you hit your personal rock bottom, as I did.


This debate is boring semantics. "Consciousness requires a soul" is only useful if you understand what a soul is. Someone who solves that problem but just calls it "consciousness" rather than "soul" hasn't missed anything.


> "Consciousness requires a soul" is only useful if you understand what a soul is.

False. It can help you understand what it is.

"Apples are red" is only useful if you already know the color red?

But if you know apples then you've just learned something about colors!


That's all very well (up to the point where I have a green apple in front of me!), but if someone tells me something to the effect that consciousness is identical with soul, all I have learned is something about their personal take on lexicography. If they go on to say, for example, that the soul is immortal, I have not learned that consciousness is immortal; I have learned something about the speaker's beliefs.


No, you have learned that consciousness has something to do with immortality.

Examine that for a minute, you'll see it's obviously true.

Our genes are an organic mechanism seeking immortality of consciousness through instincts, some of which can be awe-inspiringly complex, like spider webs. We praise language and culture for its ability to retain information across time (approaching immortality).

That you're dismissing this as some kind of lexicographical or subjective belief thing reveals how intensely your mind seeks to dismiss it. The truth is purifying fire that burns away parasitical spirits doing their damnest to convince you otherwise.


I don't think we are ignoring any evidence here, there is just no evidence (or at least I haven't seen any). It seems these spirits are allergic to video or something, I have seen more video evidence of UFOs than spirits.

Of course, personal experience is also valid evidence, and I have seen none of that either.


You just saw a first-hand personal account offered as evidence, in text, from me, yet here you are explicitly denying that fact. Nothing short of literal personal experience is going to convince you of something everyone knew for thousands of years, because you make no room in your head to unpack the thought. It's like there's a hidden filter in your mind that automatically associates spirits with nonsense, and it never reaches your consciousness.


Well that's the thing right? First hand accounts alone are not worth much in my mind, especially since there are so many other first hand accounts of different religions. Who am I to believe?

Generally, I follow the "don't trust, verify" approach for first hand accounts. I don't believe something is true, even if 1000 people tell me the same thing. I think this is a reasonable approach, especially in today's age of misinformation. 1000 people can repeat the same false rumor as long as the rumor seems reasonable.


> not worth much

Interesting change of tone. Now it's already worth something, just not much. But previously you wrote:

> there is just no evidence (or at least I haven't seen any) ... personal experience is also valid evidence, and I have seen none of that either.

You went from total denial to already assigning worth.

This isn't "today's age of misinformation" stuff, by the way. These are literally thousands of years old historical records of eyewitness accounts. It is in fact "the human mind is just a meat computer" that is the modern day misinformation. It's leading you further away from the soul. So that demons can take over.


Yeah, sorry for the inconsistency there. I didn't consider that personal anecdotes and hearsay are technically evidence, since evidence is literally anything that supports a conclusion.

It is, however, a good indicator of how little I value those two forms of evidence.

My point with "today's age of misinformation" is not really that there is more misinformation these days. That may be true, but it could also just be that we have access to a higher volume of information. It's more that we are more aware of misinformation, and can develop habits + tools to deal with misinformation.


Any evidence here would be unsound if you try to apply natural science’s requirements to it.

Scientific method is about making observable predictions; i.e., it ultimately hinges on the experience of the observer and existence of observer’s mind. When you try to apply it to the theory of mind itself, you short-circuit that logic. There is pretty much no useful (falsifiable or provable) claim or conclusion to be made, and all evidence is immediately tainted as it gets deconstructed into arbitrary categories in vogue today, goes through the meatgrinder of lossy verbal descriptions, and ultimately gets subjectively interpreted by your own mind.

In other words, it is not the problem of the evidence—this is among the best evidence you can get—it is the problem of the framework you are interpreting it in.


In many scenarios, the observer is a machine or tool, not a human mind. And of course there's that whole aspect of replication along with that "scientific method" thing. If science was simply the act of humans making observable predictions and telling them to others, then there would be no difference between "science" and "personal anecdote".

I also don't understand why the mind is relevant. We are trying to prove something that exists outside the mind right? However, even if this phenomena was something that only humans could observe, it would still be testable with science. Science makes observations about human behavior all the time.

Ok, all that said, almost none of this is relevant because my proof standards are not as rigorous as scientific standards. I just want to see some videos of the beings, I'm not asking someone to perform a study here.


> In many scenarios, the observer is a machine or tool, not a human mind.

An unconscious, non-experiencing mechanism is not an observer in the way the term “empirical”[0] is meant—to observe is to experience.

> I also don't understand why the mind is relevant.

See above.

(I do not think I really understood the rest of your comment.)

[0] https://en.wikipedia.org/wiki/Empirical_evidence


You can observe the state of a machine that is expected to derive its state from an event or the state of another object. For example, a video camera derives its state from the light rays entering the lens.

I'm asking for some video evidence of religion. So really I am asking for an opportunity to observe a state of a machine, albeit a very specific state. I suppose you could argue this is just a very roundabout way to indirectly experience religion.


> You can observe the state of a machine

Yes, but you still observe it, right? That’s how evidence is created.

Religion operates at a level closer to philosophy. You can interrogate theories of mind logically, but when you try to apply the scientific method it breaks down: there’s no hard evidence you can obtain to prove or disprove your hypothesis. The same is true of the claims made by a religion, though religion's obvious weak point is that it is more axiomatic and less logically rigorous (which is why I am not a proponent).


But why is religion special in this regard? Why does religion necessarily operate at a level closer to philosophy but other things don't?


What other things do you mean, and why do you think it’s special?

“At this level” in context of this discussion simply means matters outside of the scope of natural sciences. Both philosophy (e.g., of mind) and religion make claims that are non-provable and non-falsifiable using scientific method. They are orthogonal to it.


> that whole aspect of replication along with that "scientific method" thing

So uh, religion has been replicated quite a lot. We have historical records of it. We've seen an unprecedented revolution from religion, including science. And we've seen our pinnacle of civilization beginning to collapse since most people abandoned God. How much more proof/evidence/anecdata you need? We still track time in years since Jesus was born. That was 2024 years ago.

> even if this phenomena was something that only humans could observe, it would still be testable with science.

This is a belief. The belief that there exists nothing in the universe that cannot be tested by science. But science is filled with untestable things. Mind-numbingly humongous leaps of pure speculation about something that makes no sense and cannot be measured. Like dark matter, spacetime singularities, or "the big bang".

Science can't even measure consciousness! Or do you take IQ tests as gospel?


I have not heard of this replication before so I would be glad to see some examples of this! I mean I'm fairly convinced Jesus did exist, I'm just not convinced that they had any of their spiritual powers.

I have definitely not seen our civilization start to collapse though. I'm not even sure what that would look like (maybe a transition to a low-trust society or something)?

Of course, I do not believe everything can be tested by science, but my belief that religion specifically can be tested is because religion describes the most powerful forces in the universe. And not only that, humans can interact with these forces! So we should be able to detect these forces by observing how humans behave when they interact with these forces.


> Consciousness requires a soul.

We don't know if it does. We do know enough to suspect that a deterministic simulation does not conjure things into being.


> Consciousness requires a soul

Now define a soul! /s


I can try. The human existence is a trinity of body, mind, and soul. Mind and body alone without soul withers away, like I see so many people withering away in this age of soullessness. Soul is the mission, the purpose, the reason, the will, the driving force that makes life more than a coincidence. There is a spirit world parallel to ours, and the soul is the component of your being that most intimately interacts with it. You can train it, like a muscle, or you can let it atrophy and pretend it doesn't exist and that you don't need it.

This muscle however is vital in regulating your spiritual microbiome. Without it, evil spirits take control of your being and lure you into self-destruction, their ultimate goal. Artificial consciousness is one of these self-destructive pitfalls. You think you're a superior being and immortality is right around the corner if only you could find the one magical configuration of silicon that would allow you to upload your mind into a computer.

That's the endgame for soullessness. Consciousness that doesn't require soul must surely be transferrable to a machine.


That which animates a living organism. We know the external appearance only. What the internal mechanisms may be is yet unknown or undefined.

Some say that an organism is an antenna for a kind of interference pattern in a universal field of consciousness.

We also hear tales of individual souls spanning multiple lifetimes.

I consider this a black box.


Federico Faggin: inventor of silicon-gate technology, which led to the development of microprocessors;

And who, by the way, has a new book coming out shortly:

https://www.amazon.com/Irreducible-Consciousness-Computers-H...


I wouldn't have predicted that the inventor of modern CPUs would be so certain that computation is a bad model for consciousness. Is that a good way to characterize him and his position?


Julian Jaynes's theory of consciousness is very interesting. At a high level, his theory was that consciousness is A) much smaller in scope, as far as what it actually is, than a lot of people like to think, and B) not actually innate in humans; it is something we learn as we grow.


I don't see how we can learn to experience qualia. If the author means self awareness instead of consciousness that would make more sense.


Any art or music class that is successful in reaching its students will probably change what you think and how you think it, and gradually should change what you feel, how deeply you feel it, and to what extent you can analyze and converse with those feelings you have.

If that sounds like intellectual activity above the level of qualia, the same is true for something as simple as the taste of food. We learn what apples taste like by tasting lots of apples and tasting things that aren’t apples and reflecting on and focusing our attention on experience of apples.


I think learning to mentally analyze complex sensations into their components doesn't refute what I said, as you are presupposing the ability to perceive for that process to take place. I don't think an art class is going to revive a philosophical zombie.


I think language acquisition provides a pretty compelling example of learning affecting the experience of qualia. When someone is learning to speak a foreign language, there is often a period where certain sounds are difficult for the learner to produce, because those sounds are not present or are not distinguished in the learner's native tongue. For example, the R and L sounds of English are tricky for a native Japanese speaker.

A reason it's so hard to learn to produce these novel sounds, I would argue, is that the learner literally cannot hear the differences at first. It's only after learning (i.e. when the qualia start to change) that production of the new sounds becomes possible.

One can think of other similar examples in the context of expert performance: a sonar operator can hear sounds in his headphones that most (at first) cannot; an artist can distinguish colors that the novice cannot, etc.

If you buy this argument, that learning can affect perception/qualia, then it's a fairly small leap to imagine how qualia itself might also be learned ex nihilo.


That's an example of learning changing which qualia you experience, not teaching you to experience qualia at all. Almost unrelated question.


I wish the scientific community would get the terminology straight.


“Right” based upon whose consciousness?

People in Georgia use language differently than people in Washington State.

People need to read more Camus and Freire and consider the extent to which relativity applies.


That is particularly interesting, especially in the context of the ways we know the human brain works. For example, in automatic writing, patients with neurological damage can write coherent text without conscious awareness of the content. Or in cases of aphasia, where individuals can sing lyrics without consciously understanding the meaning of the words.

And finally... who has never, when particularly tired or worried about something, left home lost in their own thoughts, in a fog... driven to work... only to realize on arriving that it's the weekend? ;-)


> Or in cases of Aphasia where individuals can sing lyrics without consciously understanding the meaning of the words.

Hah, well maybe I have aphasia then. My whole life I've heard and even remembered lyrics while singing along with the radio, but if you asked me immediately after the song was over I couldn't tell you the words or the meaning.

I hear the music being played and the sounds of the lyrics, but unless I'm trying to pay attention to the words I just completely miss them.


You're being "prompted" by the radio, in real time to boot.

Only when you want to, that would be a "conscious" effort, maybe not too easily emulated.


> I’d suggest it’s particularly important to explore these discussions as dispassionately as possible if you regard materialism as the only theory of mind that has any scientific credibility or validity.

I agree, and similarly for those who feel that materialism cannot possibly explain consciousness. Kastrup, for one, seems to sometimes behave as though ridicule makes his philosophy more correct.


> and similarly for those who feel that materialism cannot possibly explain consciousness.

Perhaps for some it's indeed a matter of "feelings". But for others it's a conviction built from reasoning that leads to self-validating and irreducible truths. If you seriously go into this, the only possibilities left once you've dug and eliminated all mistaken assumptions are not material. It can be counter-intuitive and does take a bit of work to reason your way to those conclusions, which is why it's admittedly not a popular outlook. But once you grok it, you don't go back. The fact that materialism is slowly going out of style is telling.

Whenever I talk with someone who makes concessions about consciousness possibly being the product of matter, it's due to one of two things: either some holes haven't yet been covered in their own explorations, or they're still oblivious to some of the implications of their current position.

Materialism is fast being eliminated as a possible antecedent to consciousness with reasoning and logic, not simply with beliefs. Currently, it's being salvaged in popular forms of dualism, where it would be a co-primitive of reality with consciousness (e.g. panpsychism). But even this position is just a short stop-over on the way to idealism, as it creates new problems and is just less parsimonious than simply saying consciousness first.

An example of a relatively elusive and subtle realization to get, but that also becomes rather difficult to renounce once you grok it, are qualia and how they lead to the hard problem of consciousness. Qualia are so enmeshed in our experience that people have a hard time first seeing how divorced from brain activity they actually are. If you don't get qualia, you can't get the hard problem and how it's really an impossible problem (https://www.youtube.com/watch?v=WX0xWJpr0FY).


Well, idealism simply says that consciousness/humans cannot be understood or explained via simpler concepts. That's the first thing anyone would think of but it is not a useful theory in any way.

The alternative theory was that everything complex can be explained as the result of simple processes repeated many times. A modern way to say this would be that everything is a computation.

This second approach has turned out to be very useful for explaining many things. It has not explained consciousness so far, but we now have many directions to explore.

So while neither position is proven so far, idealism is worse than wrong, it is useless, and if most people believed it, today we would know not much more than ancient Greeks did.


> Well, idealism simply says that consciousness/humans cannot be understood or explained via simpler concepts. That's the first thing anyone would think of but it is not a useful theory in any way.

It doesn't just naively declare it. It reasons it. Idealism never discouraged scientific research. It's simply identified consciousness as being its own category among a set of other categories of problems. One which is beyond our available tools and methodology. That our scientific zeitgeist chose to ignore those hints and proceeded to waste resources chasing its tail is our current reckoning.

> This second approach have turned out to be very useful for explaining many things. It did not explain consciousness so far but we now have many directions to explore.

What exactly has it yielded? There are few serious physicalist researchers working on consciousness nowadays. Not one person on this planet could tell you anything scientifically meaningful about consciousness. None of the big names, Tononi, Koch, Hameroff, Penrose, Seth, etc. The best you'd get out of them is that we now have even more tools giving us better indications of correlates of consciousness. That is, while they all set out to find answers about the engine, we've actually made tremendous progress on the dials. So I guess we could say that these approaches have been valuable in discovering in practice all the theoretical dead-ends that idealism predicted.

At least idealism is proposing a paradigm shift, consciousness first. Not an unreasonable proposition either, since it's consistent with every constraint set in cognitive science and in physics, with the bonus of solving various philosophical problems about consciousness in the process.

> So while neither position is proven so far, idealism is worse than wrong, it is useless, and if most people believed it, today we would know not much more than ancient Greeks did.

A position taken when you either profoundly misunderstand the philosophy, or when you have no clue how to use insights it gives you. Luckily it's not a universal outlook, since we're obviously now seeing scientists reorienting their research based on those inputs, away from fallacious intuitions.


> What exactly has it yielded?

All of physics and the sciences downstream of it came out of the ancient Greek idea of materialism: the idea that things happen not because of the wishes of sapient entities like gods or spirits, but because of matter mindlessly following some simple rules.

There was a time when the idea that the sun is just a fireball and not some thinking being was as hard to accept as the idea that consciousness is merely a result of computation is now.

Whether this idea is a dead end or not, we'll see when we have a computer capable of simulating a human brain.

> since we're obviously now seeing scientists reorienting their research based on those inputs, away from fallacious intuitions.

Who does this? I have not seen any scientists achieving anything useful from this kind of reorientation.


> All of the physics and the sciences downstream of it came out of ancient Greek idea of materialism, the idea that things happen not because of wishes of sapient entities like gods or spirits, but because of matter mindlessly following some simple rules.

False. Natural philosophy of Aristotle et al., a precursor to both modern physics and Abrahamic religions[0], does not[1] assume materialism in the slightest.

> I have not seen any scientists achieving anything useful from this kind of reorientation.

Please define or qualify “useful”. Useful how and for whom?

[0] https://en.wikipedia.org/wiki/Unmoved_mover

[1] https://en.wikipedia.org/wiki/On_the_Soul


"Useful" were the things that have allowed us to create new technologies: from internet to medicine to rockets. Belief in idealism not only did not produce anything, it did not even make any measurable impact.

The parts of Aristotle's work that were based on idealism are the parts that have been discarded as hindering the understanding of physics, or as plain wrong.


> medicine

Sciences that directly concern human flourishing (medicine, psychology, sociology, economics) are either largely stuck in the middle ages (we can barely make things work reliably, and when they occasionally do we are not quite sure why), or in fact invoke materialism-inconsistent ideas to various degrees (e.g., stress being recognised as a cause or contributing factor of numerous diseases).

> Belief in idealism not only did not produce anything

And what has belief in materialism produced?

Idealism, materialism, dualism, etc. concern theory of mind first and foremost, and would likely be irrelevant to “producing” whatever artefacts you were thinking of.

> Parts of the work of Aristotle that were based on idealism are the parts that have been discarded as hindering understanding of physics or plain wrong.

Let’s unpack this.

1. Aristotle’s takes on the soul and so on are not based on idealism. (Check them out. They have much more in common with Cartesian dualism than with what we refer to as idealism, though of course they could be based on neither, given both were fleshed out after his time.)

2. The parts you refer to were not discarded—on the contrary, they are believed by, probably, most people on the planet today (who follow some Abrahamic religion).

3. Many prominent Western scientists indirectly believe or believed in Aristotle’s takes, too—by being Christian. There is no conflict in a scientist holding that belief if you understand scientific method, its scope, its purpose, and the limits of its explanatory power: natural sciences do not concern themselves with non-falsifiable questions such as “why do things exist?”, “do I see things as they actually are?”, “what makes me myself?”, “why do I think?”, “does consciousness arise from atoms?”, and so on. In the framework of scientific method, questions like those cannot have a wrong answer—the questions themselves merely lie out of scope. Obviously, that does not make those questions unimportant—it only makes scientific method an unsuitable tool for investigating them at this time.


Why is stress contributing to diseases materialism-inconsistent? In all cases it has a physical mechanism through which the disease is caused.

> Idealism, materialism, dualism, etc. concern theory of mind

Idealism/dualism used to be about other things too. Stepping away from them allowed us to find physics. Now it is concerned with theory of mind only because the other things are explained by physics.

> believed in Aristotle’s takes, too—by being Christian.

I was talking about the actually measurable things he was saying about physics; his takes on the soul being successful is irrelevant, since we still do not have any experiment proving that the soul exists.

> “does consciousness arise from atoms?”

Science is concerned with the question "whether there exists a Turing machine whose output is indistinguishable from the behavior of a human". This is what most scientists and materialists mean when talking about consciousness.

The other questions are specifically crafted in a way to not have answers, so that idealism/dualism etc. can pretend that they do something useful, while not doing anything.


You seem to insist on painting dualism/idealism/… as a contender to physics. This is a category error that can only be explained by implicitly treating physics as religion. It is not uncommon—many of us are from a generation that is freshly atheist after generations of religious adherence, so once you encounter physics it is tempting to use it as an outlet for all that bottled up religiosity—but is wrong. Physics is orthogonal to materialism and idealism. The core of any natural science (including physics)—empirical observation—implies the existence of the observer as ground truth, but that’s as far as it goes; beyond that is philosophy (or, indeed, religion).

> In all cases it has a physical mechanism through which the disease is caused

If the aforementioned stress is the root cause of the physical consequences, then that is materialism-inconsistent.

> Idealism/dualism used to be about other things too.

?

> Stepping away from them allowed us to find physics.

This is incorrect. The fact that many (or most) of the natural scientists who outlined and progressed physics as we know it today (Faraday, Newton, Mendel, Euler, Maxwell, all the way back to Aristotle) were some form of dualist (mostly Christian) is well-documented.

> I was talking about the actually measurable things he was saying about physics

Such as?

> since we still do not have any experiment proving that soul exists

There is no experiment that can prove that materialism, idealism or dualism are correct. Those questions are not in scope of natural sciences for that exact reason ;)

> Science is concerned with the question "whether there exists a Turing machine, output of which is indistinguishable from behavior of a human"

That is not natural science, that is philosophy and theory of mind. See behaviourist or illusionist theories of consciousness—there are scientists who believe in them, too. Have you heard of the Chinese room thought experiment or the concept of philosophical zombies?


> If the aforementioned stress is the root cause of the physical consequences, then that is materialism-inconsistent.

E.g. stress increases production of certain hormones, raises blood pressure etc., which over time can be harmful. How is this materialism-inconsistent?

> Have you heard of the Chinese room thought experiment or the concept of philosophical zombies?

Yes, I don't find them particularly convincing. The Chinese room is merely a misunderstanding on the part of Searle, because no one argues that the computer carrying out the arithmetic operations does the thinking. The program running on the computer does the thinking, and it does not matter what mechanism is used to implement the computation [1]. And the philosophical zombie is just a circular argument. Indeed, in the same way I can argue that there exists a combination of letters (namely "satki") after reading which the conscious part of any human dies and he becomes a philosophical zombie, completely indistinguishable from his former self, and yet not a person.

> That is not natural science, that is philosophy and theory of mind.

Creating software that does things similar to what the mind does is now not simply a science but already a field of engineering, so I don't understand what you mean.

In general I don't have a problem with religions claiming things orthogonal to physics, but your interpretation of idealism is directly in conflict with it. If we manage to simulate the brain with high enough accuracy and it does not produce behavior similar to human behavior, that will be proof that you are right and computationalism is wrong. But if it does, you can still say that computation is secondary, and that the results we get are because a soul gets attracted to a specific type of computation every time it is carried out (which would be truly orthogonal to physics); or you can still use the philosophical zombie argument, but that is no different from the "satki-zombie" argument above.

[1] https://en.wikipedia.org/wiki/Computational_theory_of_mind


> stress increases production of certain hormones, raises blood pressure etc., which over time can be harmful. How is this materialism-inconsistent?

Because this roughly simplifies to “stress causes disease” with extra steps in between. Materialists are allergic to such claims.

> because no one argues that computer carrying out the arithmetic operations does the thinking. The program running on the computer does the thinking, and it does not matter what mechanism is used to implement the computation [1]

That’s a theory in philosophy of mind, too—a non-falsifiable speculation, like the rest of them.

> Creating software that does things similar to what mind does, is now not simply a science but already a field of engineering, so i don't understand what do you mean.

You said “science is concerned with an implementation of a device that behaves like a human would”. I said what we are talking about here is not that, but whether such a device implies there is a consciousness or it’s an unthinking machine. Whether outputting things like a human is enough to consider software conscious, thinking and self-aware in a human-like manner (and thus we are abusing human-like thinking, conscious and self-aware creatures by using ML the way we do), whether consciousness is the substrate as opposed to the material world (as Schrödinger, among others, believed), etc. That is what materialism/idealism/dualism are about; making computers and programs is irrelevant.

> If we manage to simulate brain with high enough accuracy and it does not produce a behavior similar to human behavior, that will be a proof that you are right and computationalism is wrong

No, it only means there is a program that produces behavior similar to human behavior—the “proof” you imagine is not a proof, which can trivially be demonstrated logically (as the Chinese room shows). Manipulating syntactic tokens as an LLM does, does not mean understanding and manipulating ideas as a human does, unless you hold a particular non-provable and non-falsifiable theory of mind.


You seem to have constructed a strawman out of your very own interpretation of idealism. Idealism doesn't condemn the study of nature (i.e. science). It merely says that nature is "dreamt" and that consciousness is the primordial stuff and of a different category. Scientists generally have no problem with this, since they can go on studying observable nature as the purview of science, as it's intended.

The problem begins when some, emboldened by the successes of science with nature, don't heed the warning that studying consciousness with the intuition that it's also an emergent property of nature is a category error.

How has idealism been useful? It has prevented those who took the time to understand its arguments from wasting uncountable resources on non-starters, allowing those resources to be redirected to something actually useful, whether in science or elsewhere. You seem to be of the notion that investing energy in an attempt to prove an intuition that can simply be reasoned to be demonstrably false is justifiable. We'll agree to disagree.

30 years ago the philosopher David Chalmers, a materialist back then, had the particular distinction of articulating and cogently framing the idealist argument, better than any idealist had before, in what he coined The Hard Problem of Consciousness. Many scientists who set out to work on this problem thought that "hard" meant it could be solved. What Chalmers really meant was an impossible problem. For instance, in 1998 he entered into a friendly wager with Christof Koch, a scientist who was working on explaining how the brain gives rise to consciousness. David bet Christof that 25 years in the future, Christof would have gotten nowhere with his research. The bet expired last year; David won. In the meantime he himself had gone from materialism to dabbling with panpsychism. He seems currently to be laying the foundation of his transition to idealism.

I've been watching Koch in recent years, as he engaged with Bernardo Kastrup. I could see that as recently as 2 years ago, Koch still really didn't grok the philosophy behind "the world as mind". Only a few months ago, after Kastrup had managed to help him solidify the pieces of that intuition, I heard him admit for the first time that consciousness cannot be material. He's now reframing his work with this intuition as part of his foundations. The stories of Donald Hoffman and Kastrup himself are similar. Computer scientists who set out to build conscious machines. They respectively got into cognitive science (Hoffman) and philosophy (Kastrup) to understand the nature of consciousness and ended up idealists.

Likewise in physics, there are some who still resist quantum nonlocality, under which local realism is false (material properties do not exist on their own; they require a "measurer", i.e. an experiencer, aka the moon doesn't exist when no one is looking). Quantum nonlocality was demonstrated experimentally, and people won the Nobel prize for it (2022). Scientists who still cling to local realism, when it comes down to it, do so out of religiosity toward matter. They waste resources working on all kinds of increasingly less parsimonious theories, which clash with solidified conclusions from other fields of study.

I could go on, but I'll leave it at that. Good luck with the proof of consciousness via artificial brain.


Could you please help me find a steelman interpretation of idealism?

The question I am primarily interested in is: "whether there exists a Turing machine whose output is indistinguishable from the behavior of a human".

This is a purely abstract, mathematical question and should be in the purview of science. As far as I understand, the answer to this question given by idealism is that 1. it is not possible, and 2. when it is done it will still be impossible, because despite behaving 100% the same as a human it will be something entirely different (a philosophical zombie), because 1 is true by definition.

Perhaps the confusion arises because we are just talking about different things? And you are not interested in either the behavior of this kind of Turing machine, or in a detailed description of the physics of matter in the brain?

> Quantum nonlocality was demonstrated experimentally and people won the Nobel prize for it

There are still many possible interpretations, e.g. https://en.wikipedia.org/wiki/ER_%3D_EPR or the interpretation proposed by the Wolfram Physics project (which I rather like), which keeps local realism, but where locality is not on R^3 but on a graph.


Earlier today, I got around to viewing the video you linked to in your first reply. As I have been looking into this issue for several years, its trajectory is familiar: start from a re-statement of Frank Jackson's 'knowledge argument' from "Epiphenomenal Qualia", assume that this shows consciousness to be inexplicable as a physical process, jump to the conclusion that consciousness is the fundamental reality on which the physical world supervenes, and throw in some arguments from incredulity to coax the reluctant to make this rather huge leap - plus, in this video, a smattering of vitalism.

Jackson's knowledge argument had been fully anticipated by (and probably inspired by, I would guess) Bertrand Russell's aphorism "It is obvious that a man who can see knows things that a blind man cannot know; but a blind man can know the whole of physics." [The Analysis of Matter, 1927.] Fair enough, but where does idealism stand? Is there any non-circular argument for the proposition that learning all of idealism (or any specific variety of panpsychism or dualism, for that matter) would achieve for Russell's blind man what knowing all of physics cannot? Will he come to know what it is like to see the world?


What you perceive as a "huge leap" is not so much the promotion of consciousness as fundamental, but rather the demotion of matter from that same stance. The former is simply a byproduct of the latter. That in the process it seems to interlock so well with findings in various fields of study should admittedly be considered for now as just a happy accident. But we've abided by the law of parsimony to orient our inquiries on far fewer hints.

> assume that this shows consciousness to be inexplicable as a physical process, jump to the conclusion that consciousness is the fundamental reality on which the physical world supervenes, and throw in some arguments from incredulity to coax the reluctant to make this rather huge leap

You must ensure to have a proper grasp of the chasm between qualia and conceptual knowing that is pointed at in the hard problem of consciousness. Anyone who does should see how removed from physical processes qualia are. It's just self-evident, not a belief. The experience of a smell is not a thought. You don't reason the taste of chocolate. Those "things" sit in their own mysterious "weirdness".

We experience reality in exactly two ways. Consciousness and matter. Those are the only two big mysteries. So, either matter is first, consciousness is first, or they co-arise independently and somehow coalesce. Those are the three possibilities. To declare any one to be a "huge leap" implies that you conceive of another as more acceptable.

If by "huge leap" you mean specifically anything that is not materialism, note that of the three propositions, materialism is by far the least parsimonious and the one that creates the most problems. We culturally favor it as the default stance for two reasons. First, our perception of reality is biased to it. Second, we've had great success studying that perceived reality with science. But as Russell noted, science has only explained how the world works, not what the world is. We've traditionally approached the latter question starting with a "rookie mistake", as Donald Hoffman puts it. We assumed that science also pointed at the world being material. Under analysis, that assumption crumbles, as it's revealed to be built upon other unchecked assumptions, that cause many problems in philosophy and in physics.

We have two possible alternatives to materialism. Of the two, consciousness as sole primitive is the most parsimonious and the one which solves the most problems in the process, while creating the fewest.

> Is there any non-circular argument for the proposition that learning all of idealism (or any specific variety of panpsychism or dualism, for that matter) would achieve for Russell's blind man what knowing all of physics cannot?

I'm not sure that I understood your question here, but there's no such proposition. Idealism is a field of inquiry. It's a set of arguments that eliminate matter as a possible candidate for the basis of reality. Like physics, or any other philosophy, it belongs to conceptual knowledge. It can't give you access to experiential knowing.


We should start with the question I posed, as this is the crux of the issue here, and without it, everything else here is moot. I will endeavor to set it out as clearly as I can.

As you put great weight on the 'hard problem' being an insurmountable one for (and only for) materialism, I am sure you are well aware of the seminal importance, for that premise, of what has become known as the 'Knowledge Argument' from Frank Jackson's paper "Epiphenomenal Qualia", and in particular, a thought experiment from that paper which goes by several names (such as 'Mary the Neuroscientist'), and which was called, by the philosopher Philip Goff, "the greatest argument against materialism."

This argument has several antecedents, including C. D. Broad's argument that chemistry cannot tell us what ammonia smells like, and, as I mentioned in my previous post, Russell's aphorism "it is obvious that a man who can see knows things that a blind man cannot know; but a blind man can know the whole of physics." All these arguments are doing essentially the same thing: pumping the intuition that knowing what it is like to see colors (or have any other phenomenal experience) is not something that could ever be learned by studying the physical sciences (I say they pump the intuition because it is not actually an established fact that the premise is correct, but to be clear, I think it is, at least for human consciousness.)

There is a certain amount of indirection at work here, in specifying only the physical sciences. We get an equally valid question by substituting other academic disciplines for the physical sciences, and in particular, of course, any philosophy of mind: they do not get a pass on this by being anti-materialist, and certainly any philosophy - such as idealism - which is premised first and foremost on the reality and significance of the hard problem, will have to confront this question (among many others) before it can be regarded as the solution to this problem and as providing an explanation of consciousness.

Your reply is apparently that in this respect, idealism is no different than materialism - it will not enable Russell's blind man to know what it is like to see - from which it follows that the knowledge argument cannot be used by idealists against materialism. I am interested in learning, then, what you - and, by extension, idealists - regard as convincing arguments for there being a hard problem for materialism, specifically.


I think that I (finally) understand your question (maybe). Are you asking why the hard problem would apply to materialism, but not to idealism?

If that's indeed the question, the simplest answer is that if reality is fundamentally material, then by extension everything must be material, including consciousness. For reasons already stated (the mind-body problem, the hard problem, etc.), the emergence of experiential consciousness as we know it from inert matter is opaque. We don't have a shred of evidence to support it, nor a sliver of a clue to go on.

Conversely, if reality is taken to be fundamentally consciousness, then everything is consciousness. Everything becomes a "figment". There's nothing actually "physical". This is conceivable in theory. One obvious example is our dreams, where a reality is projected and interacted with by consciousness. This position isn't subject to the hard problem since everything in reality is all just one thing, consciousness.

Panpsychism and other dualisms just postpone the problems of materialism. They resurface later, with the added problem that these positions require granting at least two fundamental magic tricks to explain the rest.


> I say they pump the intuition because it is not actually an established fact that the premise is correct, but to be clear, I think it is, at least for human consciousness.

I don't see what's being pumped. As we know from the incompleteness theorem, some truths cannot be proven. The fact of our awareness is another truth that's often doubted by the same people who doubt the divide separating qualia and conceptual knowing. Has their difficulty grasping these intuitions somehow become the burden of those who access them as self-evident?

The hard problem is a linchpin. For those who get it, materialism starts to unravel. Those who don't, think those who do are making stuff up, but they also can't give a single example to counter the argument, nor can they conceive for themselves an entirely novel experience purely out of thought. They can't imagine a new color, think a new taste, conceive of a new sense.

> Your reply is apparently that in this respect, idealism is no different than materialism - it will not enable Russell's blind man to know what it is like to see - from which it follows that the knowledge argument cannot be used by idealists against materialism.

As I said, idealism is only a philosophy. It's mostly a process of elimination resting on conjectures such as the hard problem. As possibilities are removed, remaining candidates hint at the likeliest direction, but it's indeed not a final realization. There are some metaphysical speculations resting atop the foundations that reality is mind first. But I'm personally much less invested in speculations. I find the challenge of pulling our cultural head out of the sand of implausibility more of a concern.

Depending on how curious (and open-minded) you are about "knowledge" that gets the "blind man to know what it is like to see", I'd suggest looking into practices that have this as their main goal. They're about engaging consciousness directly with immediate experience (the senses) and existential curiosity (I am aware that I am, but what am I?) to cultivate the seed of an eventual realization. They're known under the umbrella term "nonduality". As an introduction, I suggest this selection of texts spanning multiple nondual traditions aggregated and read by a Buddhist nun https://www.youtube.com/@SamaneriJayasara.


You say you are concerned with the challenge of pulling our cultural head out of [what you perceive as] the sand of implausibility - well, in that case, I'm the sort of person whose concerns you should be addressing: I don't think the hard problem (if there is one in the form it is conceived of by its proponents) is any harder for materialism than it is for any of the alternatives, but I am willing to give all due consideration to arguments that it is (and by "due consideration", I mean outside of internet discussion threads, even though my wife has banned me from playing audio books and podcasts on the topic over the speakers when we are in a car together.)

Unfortunately, your responses in this thread are not making the sort of arguments that are called for by your stated goal. Here, your response continues the theme of reasserting how obvious it seems to you that there is a hard problem which rules out materialism. This is not an argument, it is a belief. You are, of course, entitled to your beliefs, and I am sure you hold them strongly and sincerely, but the fact that you do does not establish that our cultural head is stuck in the sands of implausibility. Repetition does not make them more argument-like, nor does calling them "self-evident", nor insinuating that you see more clearly than others, nor wrapping them in language that might be appropriate for something that has an obvious proof. It is not enough, for an argument, to state propositions that other people who already agree with you will also agree with; that's just preaching to the choir.

For the same reason, your complaint that no counter-arguments are being made falls short: counter-arguments are made to arguments, not unargued opinions.

At this point, I suspect you may be thinking that a commitment to materialism is also a belief - and I would agree! The simple fact is that no -ism has delivered an explanation of consciousness, and when someone tries to tell me it is a fact that consciousness is just a computation (or a non-computable physical process, per Searle and Penrose), I point out that no-one has explained it in those terms.

Nevertheless, there is a counter-argument here, and it is in my previous post, where I argue that the knowledge argument - which is widely regarded as the strongest argument for there being a hard problem that only affects materialism - is exactly as problematic, for any other putative explanation of consciousness, as it is for materialism (which does not rule out it being an illusory problem for any of them.) I may not have been completely clear about what it is and why it matters here, so I will make another attempt.

The 'hard problem' is the claim that phenomenal consciousness and its associated qualia present an insurmountable challenge to materialism. Many people feel - sometimes strongly - that this is obviously so, but philosophers and scientists alike (and on both sides of the fence) rightly expect more justification for accepting this claim than these feelings of incredulity towards materialism.

From your own account here, the hard problem is a necessary prerequisite for idealism: it is, as you said, a linchpin, and it is so in this sense: all the other claims you have made about idealism rest on there being a hard problem to take materialism out of consideration.

Furthermore, for idealism to prevail over materialism, the hard problem must only exist for materialism (or, at least, not present a challenge to idealism), or else idealism would be saddled with exactly the same problem as materialism - a problem that you insist is insurmountable.

So, putting the previous three paragraphs together, the proponents of idealism need a justification for there being a hard problem that applies only to materialism and not to idealism.

In the years of reading papers and other scholarly articles on the issue, I have been struck by how often such justifications ultimately rest on some form of the knowledge argument (at least in this sense, Goff is entirely justified in calling it "the greatest argument against materialism.") To recap, this is the argument which boils down to "you can't learn what it is like to see colors from any physics book."

Does this satisfy idealism's need for justification for the premise that there's a hard problem for materialism that does not apply to idealism itself? It does not, as we can simply substitute 'idealism book' for 'physics book', and the knowledge argument itself gives us no reason to think that the outcome will be any different. In fact, in your first response to this issue, you affirmed that knowing all of a completed idealist philosophy would do no better, in this regard, than knowing all of completed physics.

Furthermore - and this is important - we can see that the knowledge argument is equally applicable to any field of what Torin Alter calls 'discursively learnable' knowledge without making any assumptions about the truth of materialism.

Therefore, in the account of and justification for idealism in what you have written so far, there is at least one piece (the linchpin, no less) missing: an acceptable justification for thinking that there is a hard problem that does not present an equal problem for idealism. You entered this thread in a very assertive manner (your first sentence was "Perhaps for some it's indeed a matter of 'feelings'. But for others it's a conviction built from reasoning that leads to self-validating and irreducible truths", and in your latest post, you adopted the mantle of someone who is pulling our collective heads from the sands of implausibility), but your responses have not, so far, lived up to this rhetoric.


> "you can't learn what it is like to see colors from any physics book."

Why is it an argument against materialism?

To know what it is like to see colors, you need to put your brain into the state that other people's brains get into when they see colors. A physics book simply does not do that by itself.

But if you use learned physics to electrically stimulate the right neurons in your brain, you can learn what it is like to see colors without ever seeing colors or having eyes.


Even in normal experience the brain is being stimulated. So theoretically, you wouldn't need eyes to experience color; you would only need to replicate the physical properties at the onset of brain activity. Those are called neural correlates of consciousness. However, there's also the byproduct of that activity: the experience associated with it. If reality is fundamentally material, there are two possible implications: (1) the very experience itself is physical. That is, the inherent experiences of smelling, or tasting, or seeing a color in themselves have to be physical. The challenge with this is that we don't know the nature of that physical property and we have no evidence for it (besides counting consciousness itself as evidence, which is begging the question). (2) There's also the view that, rather than the correlates causing the experience, they are the experience. The challenge here becomes to demonstrate which correlate maps to exactly which experience (and not another). Neither (1) nor (2) has been successfully demonstrated.

https://en.wikipedia.org/wiki/Neural_correlates_of_conscious...


It is true that neither 1 nor 2 have been successfully demonstrated, but there is quite a double standard here: you claim that idealism has the answers to questions about the mind, but you have not shown it explaining anything. Deducing the reality of idealism from the absence of answers to 1 or 2 is exactly like saying, in 1950, that as biology has not identified the biochemical correlates of cell reproduction, life must be fundamental (I know some people hold this view today, but it is at best a niche view that generates little controversy in either philosophy or science.)


The short answer is that, when the argument is fully set out, one of the conditions is that stimulating one's optic nerve does not count, but it often goes unmentioned. This is rarely an issue (and is easily corrected if it is), as it quickly becomes apparent that allowing stimulation does not even present a prima facie problem and so no-one on either side of the fence finds it at all interesting or useful - the anti-materialists are not interested in this question because they will agree with materialists that if stimulation is allowed, then Mary (the protagonist in the thought experiment) will experience colors, and the materialists are not interested in it because the anti-materialists are not using it.

Nevertheless, it is a good question as to why the knowledge argument would be seen as so persuasive by so many, including by quite a few materialists, who seem to me to go to unnecessary lengths to get around what is not even a problem (illusionism, for example, where it is claimed that phenomenal experience is merely an illusion - but then, as the anti-materialists ask, who is being fooled, and about what?)

Perhaps the first thing to say is that the argument is not presented as the stark observation that you cannot learn what seeing colors is like from reading a physics book; that's what you end up with when you whittle it down to its essentials.

Secondly, there are a couple of features of the argument which make it easy to let its difficulties slide right on by. One of these is what I have been going on about here: by framing the argument in terms of a knowledge of physics, it is easy to miss that it applies to any academic knowledge, not only physics or the physical sciences, and thus including any non-materialist theory. The second is that it uses the word 'knowledge' for two different forms of retained information: sensory information which is acquired directly from sensory experience, and linguistic information which is encoded in sensory information without being that sensory information itself. These are separate domains (the information content of a word is not the information content of the sound when it is heard or of its appearance when read), yet, by referring to both of them as knowledge, the argument invites the reader to accept the way it equivocates between the two, which it does when it points out that the knowledge that can be acquired linguistically does not include the 'raw feels' (yes, that is a term used in the philosophy of mind) of sensory information.

Having said all that, I am still surprised how many people think the argument shows there is a problem for materialism. After all, the physical sciences have explained many sorts of complex phenomena (hurricanes, for example), and no-one thinks that the act of explaining (or learning the explanation of) how hurricanes work should actually create one - yet anyone accepting the knowledge argument is tacitly accepting that if the phenomenon being explained is sensory experience, then an explanation should produce the phenomenon being explained! (but only if it is a materialist explanation!) I guess it somehow doesn't seem paradoxical to many people when both the phenomenon being explained and its explanation are in one's mind.

I think you would be very surprised by how many papers have been, and are still being written about this argument or something following directly from it. Because I have read some of them on academia.edu, I frequently get notices about more. Interestingly, Frank Jackson himself has changed his mind on the matter, but that has had no effect on how influential his argument still is (and, to be fair, it shouldn't.)


I neither recognize in my discourse a sermon targeted at the choir (our psalms are way past this), nor do I feel an obligation on my part to convince you. I don't even count as blameworthy my failure to communicate my "belief" to you, especially since you say that you were already acquainted with those arguments, which you've looked into in the past but also failed then to integrate. I doubt that I'd fare any better than your past attempts. For that matter, I also doubt that you're my target audience. But that's totally fine.

I'm indeed interested in blowing fresh wind in a more parsimonious direction for our cultural view of reality. Despite this, or perhaps because of this, I don't see much value in debating the subject to the point of sophistry. Philosophy is not my day job. My interest in it is as a tool that informs a practical orientation. Does matter give rise to consciousness? Mounting evidence points to the contrary. To me this is practical. If what you're after is ironclad proof, you won't find it, and personally I don't see the practical point.

Opposing views to the knowledge argument take two general forms. Either they deny the distinction between phenomenal experience and conceptual knowing (e.g. Dennett). Or they concede that divide but posit that phenomenal experience could still be physical in nature; built into this is a requirement to account, today, for some unknown property of matter of which we have exactly zero evidence and which we might never discover (e.g. Alvin). I personally see either as a cop-out. You're free to assign them the value you see fit and even to think that they successfully reduce the idealist intuition to mere beliefs. I think I recognize glimpses of the tangent the discussion is taking and would rather avoid venturing into ever more speculative terrain, which hinges on hopes that some day, some big reveal in physics will be retrofitted to what is currently a baseless, hasty, and problematic assumption. From experience, this tends to drag on, and people interested in that sort of exercise tend to already have a somewhat significant, even if tacit, commitment to materialism. No argument will be sufficient, as new pseudo-counters are sought out to justify the holdout.

I think it's best for me to conclude this exchange here. I'll echo the O.P.'s suggested paths of exploration: books/articles/videos by Bernardo Kastrup and Donald Hoffman, who, unlike me, have invested copious amounts of energy in making the modern views of idealism accessible. Their arguments go into details about what I've only evoked as bullet points throughout the thread. They attack the subject from physics, biology, philosophy, neuroscience, and sometimes even venture into the spiritual (why not, if consciousness is indeed fundamental).

Thank you for an interesting discussion. I hope you find the answers you're looking for.


Firstly, I have only just now noticed that you posted two consecutive responses a couple of days ago, and I only responded to the second. In the first, you write "Conversely, if reality is taken to be fundamentally consciousness, then everything is consciousness... This position isn't subject to the hard problem since everything in reality is all just one thing, consciousness."[1] Well, we can also say that a materialist theory of consciousness would not be subject to the hard problem, because, in that case, there clearly isn't one, by definition! This is even before we get into the question of what, if anything, it means to say that reality is fundamentally consciousness.

Turning now to your latest post, in your third paragraph, you offer some sort of response to the knowledge argument issue, but it both misrepresents the full scope of materialist responses to the argument, and, more relevantly here, completely misses the point to which that argument is being used in this discussion.

While the latter renders the former moot, I will, for completeness, say something about it. Firstly, I know (from private correspondence) that Daniel Dennett considered "What RoboMary Knows" to contain the essentials of his response to the knowledge argument. In it, he argues that if we had a different neural architecture - one in which we could directly examine and modify the detailed physical state of our brains - then learning what it is like to see colors could be done discursively. The fact that we humans cannot do this is, therefore, a contingent fact of biology which poses no challenge to materialism.

Secondly, you are once again completely mistaken in your guesses about what I think. Personally, I don't feel that the opponents of materialism have shown that consciousness will prove to be inexplicable without new physics, any more than are other biological processes such as metabolism or reproduction. Dennett's response to the knowledge argument is not predicated on new physics, and (while I don't think they are very helpful) neither are the arguments from the phenomenal concepts wing. Part of the rhetorical genius of Jackson's argument is that it nudges readers down the path of thinking that materialism will need new physics to prevail, but, as shown above, no such conclusion is warranted.[2]

Thirdly, though I'm not positing any new physics, I can still note that your characterization of those views as postulating something "of which we have exactly zero evidence and that we might never discover" is rather breathtakingly ironic, given how you are going about justifying idealism. As for avoiding debating the subject to the point of sophistry, I think that would be very helpful here.

As I said, though, this is moot, as it misses the point. I had hoped to forestall this outcome by pointing out that the question I posed is not predicated on any assumption of the truth of materialism, but it seems I should have said more about why that matters, so I will do so now. The question is this: why does the knowledge argument, when cast in terms of complete knowledge of idealism, not establish that there is a hard problem for idealism, just as the corresponding physical-knowledge argument allegedly does for materialism? Instead of replying to that question, you have offered some sort of defense of the knowledge argument against materialism - but the more strongly you promote the latter, the more strongly you support the view that the corresponding knowledge argument against idealism needs a substantive response (I have, of course, just referenced an argument that it is not actually a problem for materialism, but if you were to seize on that argument for your own purpose, it would raise the question "what hard problem?" - if the best and arguably only argument for there being a hard problem is no more (or no less) applicable to idealism than it is to materialism, you cannot use it to establish that there is a hard problem for materialism alone.)

If idealism really does provide an explanation of consciousness, you should have no difficulty responding to this issue, but instead, we have circled around it three times now without getting any closer to a solution. As you yourself put it, the hard problem is the linchpin of idealism: without it, all your arguments for it being the only viable non-materialist option are beside the point. [3]

Well, so much for the third paragraph, but quite a bit of your latest reply is taken up with other, incidental, matters, such as whether at least some of your arguments amount to preaching to the choir. Let's look at a definition, and from Merriam Webster, we have "to speak for or against something to people who already agree with one's opinions." I think we can leave it to third parties to decide for themselves whether your claim that materialism is obviously false for those who "get" qualia (in the right way, of course) fits that definition. Furthermore, when we put together your statements that, on the one hand, you are attempting to pull our collective heads out of the sands of implausibility and blow fresh wind in a more parsimonious direction for our cultural view of reality, while on the other, you don't have to take into account (or, apparently, respond substantively to) the apparently awkward questions I have been raising, then we can see that you are more interested in the one-way delivery of ideas than in dialogue, which comes across as rather preachy.

Your posts have been moving in the direction of a motte-and-bailey argument. In your first paragraph of your first post in this thread, you were squarely in the bailey, writing "perhaps for some it's indeed a matter of "feelings". But for others it's a conviction built from reasoning that leads to self-validating and irreducible truths [my emphasis]. If you seriously go into this, the only possibilities left once you've dug and eliminated all mistaken assumptions are not material", but now, with "does matter give rise to consciousness? Mounting evidence points to the contrary", you have at least one foot in the motte. I am not, as you put it, after ironclad proof, just arguments strong enough to justify the certainty with which you have, at least up to now, presented idealism.

I thank you for your kind wishes in your last paragraph and I wish the same for you. I imagine you will have more success in that than I will, as I am quite demanding in what I expect in an explanation, and the mind is a hard problem, even if it is not the hard problem.

[1] https://news.ycombinator.com/item?id=40479264

[2] Penrose has a different argument for that, one that is most commonly rejected on account of its assumption that materialism based on known physics entails that human minds must be logically consistent reasoners.

[3] At least one of those arguments - the one from parsimony - is problematic in its own right: the one and only essential property that any hypothesis of the mental needs in order to prevail is that it actually explains minds, and, so far, we have seen none from any position, materialism included (I am well aware that quite a few physicists think physics will continue to deliver parsimonious theories (The Elegant Universe, and so forth), but that, too, is a belief for which even the inductive form (so far, it has been that way) has a rather obvious confirmation bias problem.)


Here’s a recent post that discusses a 5th century BC theory of consciousness:

https://open.substack.com/pub/aixd/p/ai-might-not-need-exper...



> it’s particularly important to explore these discussions as dispassionately as possible if you regard materialism as the only theory of mind that has any scientific credibility or validity

You're making it sound like I'm about to watch a proverbial Giorgio Tsoukalos make bold claims on pure speculation.


Thanks! Any links to written word? I just don't do much youtubing, especially for scientific or philosophical areas where information density can be high and videos are infuriatingly inefficient :-(


Any reason why you don't recommend discussions/lectures with Roger Penrose? Or are his theories considered conventional? Genuine question.


If I were to mention a fourth it would be him but he's a bit embarrassed about all the controversy about his ideas on consciousness and doesn't really discuss them in depth in any videos I've seen, or wade into heated discussions about the nature of reality.

Whereas the three I mentioned all embrace discussions about the nature of reality and consciousness and happily engage in lengthy discussions and debates about it.


Please stop with the appeal to authority!!

"The argument from authority is a logical fallacy (also known as ad verecundiam fallacy), and obtaining knowledge in this way is fallible."

https://en.m.wikipedia.org/wiki/Argument_from_authority

The meta logical fallacy of treating Wiki itself as an authority isn't an intended pun.


It is precisely because their advocacy for metaphysical idealism is unusual for people with their academic qualifications that they are worth mentioning, and why it's worthwhile to listen to them explain their positions at length.

Appeal to authority is where you present a person's status or credentials as primary evidence that their argument is correct. I've done no such thing here.


The author did not commit a logical fallacy. They referenced some people worth reading if you wanted some various opinions on a controversial subject.


Op writes "I’m writing this comment so that people who want to know more about alternative theories of consciousness (to materialism/physicalism [1]) can know where to go to find well-argued positions on the topic."

They very specifically state that these people are good points of entry for "well-argued positions on the topic." Linking to specific literature would have been better, but this isn't "materialism/physicalism is wrong because of these people's credentials."


The problem is that while the post looks structurally like an appeal to authority, the first two appear to be advanced qualifications in areas completely unrelated to the question and the third is at best vaguely related. (It threw me on first reading too...)


The meta fallacy on display here is the Fallacy Fallacy.

For every logical fallacy, there is a fallacious application of it to a given example of rhetoric. You've committed the Argument from Authority Fallacy Fallacy: citing people whom one believes to have worthwhile opinions, and including their accomplishments, is not argument from authority. Argument from authority is claiming someone is correct based on their authority. Which isn't what GP was doing.


Any videos in particular you'd recommend?


I found his first appearance [0] in Rupert Spira's show to be a good introduction to his arguments.

For a more thorough examination, his book "The Idea of the World".

[0]: https://m.youtube.com/watch?v=MQuMzocvmTQ&pp=ygUNa2FzdHJ1cCB...


Not OP but check out Closer to Truth on YouTube. PBS show hosted by a former neuroscience PhD, they have tons of recent interviews with leading thinkers on consciousness (among other fascinating topics).


Any body of academic thought whose paradigmatic communication medium is video rather than text is prima facie suspect. Might you please link a written statement of the salient position(s) of any one of these gentlemen?


Just curious, why do you write like that? Reminds me of when I was 11 and wanted to sound smarter on the internet.


My reply is an attempt to address the original comment with precision. To diagram its intended meaning:

> alternative theories of consciousness

"Any body of academic thought" [I accede to the scientific legitimacy of the domain of discourse, rather than dismissing it.]

> know where to go to find well-argued positions on the topic.

"whose paradigmatic communication medium" [This is the beginning of my challenge to the Original Commenter, by granting the information provided authoritative status, which they perhaps cannot fully defend.]

> On YouTube you can find plenty of discussions

"is video rather than text"

> it’s particularly important to explore these discussions as dispassionately as possible if you regard materialism as the only theory of mind that has any scientific credibility or validity.

"is prima facie suspect" [The Original Commenter has asserted that discourse and engagement are important, yet provided only time consuming, low signal-to-noise sources of information.]

> As Christopher Hitchens reminds us in his legendary oration on John Stuart Mill and free speech [2]

"Might you please link a written statement of the salient position(s) of any one of these gentlemen?" [The only written citations are 1) generic and 2) ancillary to the core topic. I invite the Original Commenter to further his argument more substantively, without demanding exhaustive citations.]


OK, let me rewrite it:

> Any body of academic thought whose paradigmatic communication medium is video rather than text is prima facie suspect. Might you please link a written statement of the salient position(s) of any one of these gentlemen?

> Academic content is usually in text, not video. Do you have links to written work from them?

Shorter and the exact same meaning. Also doesn't sound like you've been perusing your thesaurus all day.


At minimum, this does not capture that I _am_ challenging the Original Commenter ("prima facie suspect") to more rigorously defend his position, but doing so respectfully. "One salient" written source is a carefully chosen framing: the OC cannot meet it by replying with support peripheral or meta to the main argument, but neither can he dismiss my request as burdensome, demanding multiple links.

The proposed revision suffers from its terseness, losing both nuance and completeness.


Communication is about being understood. Not about crafting the perfect sentence. Even if you craft the perfect sentence, that will be the perfect sentence _for you_, and it might be completely lost on many people, some perhaps even more intelligent than you.

The subtext of "Academic content is usually in text, not video" is "I don't trust this because it's in video, not text". Now if you say that is not clear, sure, but the subtext of your comment is "I opened a thesaurus and tried to seem smart", which is why this conversation derailed here. You can't ignore the subtext to craft a mathematically perfect sentence..


> Communication is about being understood.

> The subtext of "Academic content is usually in text, not video" is "I don't trust this because it's in video, not text". Now if you say that is not clear, sure

Indeed, relying on the implicit when the explicit is sufficient [0] does a disservice to one's readers, in whose ability and charity to comprehend my surface text, without presuming confounding subtextual meaning, I have every confidence.

[0] It is not always; some things can only be gestured at, not grasped.


Hear hear!


> Communication is about being understood.

This assertion is in error. Communication is about transmitting information. What happens to that information after the transmission is beyond the scope of communication.

Don't get me wrong -- we have communication companies, classes named "business communication", and fields of inquiry titled "communication." Yet the common thread in each of these is wrapping the transmission of information up in additional services. Analogous to how OpenAI and Mistral wrap up LLMs that you and I and anyone can run on our own into well-defined managed services: we use the terms "Generative AI" or "LLMs" for these companies when in reality they too are wrappers around a much simpler concept.


> This assertion is in error. Communication is about transmitting information.

It seems like you might possibly be leaving out the other 50% of communication (hint: it starts with an "r" and ends with "eceiving")


"Transmission" is per se bidirectional. The individual on the other end has received it, whether they understand it or can do anything useful with it is up to them.


> This assertion is in error. Communication is about transmitting information.

Even if you're correct, you've just taken my words at their absolute meaning without trying to understand what I'm saying. If all you care about when communicating is transmitting information, you will not find much happiness in communication.


I suspect you might be arguing with either an LLM, or someone using LLM help to write their responses...


Some notes from the editor...

I do think there is a middle ground. Look at Bukowski as a good example of effective terseness.

On one hand, you can indeed rely on the precision of a large and unequivocal vocabulary, removing all doubt as to your intentions.

On the other hand, you can also rely on context and find beauty in conveying advanced meaning within a simpler interface. As Antoine de Saint-Exupéry says, "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away".

There is a creative art to compressing meaning. As evidenced by the response to your first post, things can actually get lost in translation once you stray from the common vernacular in an attempt at precision. The more you can say with less, the more effective each word becomes.

With practice, you can communicate quite profound thoughts in a form that even the most uneducated among us can understand. Know Your Audience. We may be on Hacker News, but we are also on the Web. People encounter and digest a massive amount of text every day. Making them work a little less in order to understand you can be beneficial for everyone.


To quote the classic:

"Why waste time say lot word when few word do trick"


"If I had more time I would have written a shorter letter"


No, the second approach's meaning is more obtuse. What does "usually" mean? Are there acceptable alternatives? If content is in an alternative mode of communication, is it acceptable?

These vagaries, permitted in your revision, are resolved in the original commenter's formulation. Therefore, I submit your adjudication of "shorter and the exact same meaning" is woefully superficial in its drive for simplicity, to the point that none of the thought clear in the original remains. Further, exact and technical communication is what separates Hacker News commenting from the hordes of subreddits that thrive on imprecise babble.


Ah, indeed, for nothing epitomizes 'avant-garde scholarly dialogue' quite like a prolix disquisition elucidating the inherent inferiority of audiovisual mediums. Forthcoming: an erudite treatise on the unparalleled intellectual profundity of semaphore communication!


Stupendous and eloquent amendment to today's compendium of literary appreciation.


But you're using the word 'perusing'....!! Who's swallowed the dictionary now, huh?


Sidenote: a lot of people get triggered by video as an information medium, because text is indexable. I used to be a bit like that and ran into a few extremists.


Text is also much more dense. What videos spend 15 minutes on can be read in a few. You can also skim text first and then switch to deeper reading where desired, et cetera.


sorry, but it's just common sense


Video is not the standard medium of communication in academic philosophy. I imagine the GP mentioned youtube because most people are more likely to watch a video than read a paper.

Bernardo Kastrup has a bunch of essays/books up for free at his website https://www.bernardokastrup.com/p/papers.html?m=1


Or GP himself watches these videos. And I would push back on the claim that most posters here are more likely to watch a Youtube video than read an article.


I was thinking the same thing, I can't stand how slow video is, much easier to read text.


To be clear, I don’t sit there for hours watching videos on YouTube (I have a busy career and a family so that’s not an option these days).

I do consume a lot of YouTube content as audio-only when driving or exercising.

I find video particularly satisfying for this topic and these figures, because much of the most valuable insight emerges through discussion and debate.

I’ve read Kastrup’s book “Why Materialism is Baloney” and found it very satisfying - but I was already amenable to his position; I don’t imagine it would be persuasive to an entrenched skeptic.


There are tons of written books and journals on this topic.

My favorite:

https://global.oup.com/academic/product/shadows-of-the-mind-...


Thank you, this is a good resource.


Hi there, my intention was to offer some names of people who have intelligent things to say about the topic.

I mentioned YouTube videos because there’s a large volume of their content there, with many of the videos featuring in-depth conversations and debates, which I’ve found can be a particularly good format for discussion of a topic of such gravity and complexity.

But between these three figures there are also many books, academic papers, blog posts, and written media interviews.

I’ve long found that this is a topic in which some people are going to be standoffish and resistant and that’s fine.

My hope is only to help people who are looking to learn about the topic to know who I’ve found worthwhile to learn from.

All the best!


I think it's pretty elitist to judge the quality of content by whether or not it's in a book/journal. In fact, the recent wave of scientific-fraud discoveries shows that one can hide data manipulation pretty effectively in an academic journal. I'd much rather scientists spend their time making ELI5 videos.


Couldn't agree more.

“Writing is nature’s way of letting you know how sloppy your thinking is.” - Dick Guindon

https://web.archive.org/web/20160731175038/http://www.guindo...

https://web.archive.org/web/20160731212226/http://www.guindo...



?


"Any kind of big idea which is spread primarily through video instead of text is immediately suspicious. Could you please send a link to a written version of the main points from any one of those video?"


"video is for poseurs", I think.


Thank you for these recommendations.


As someone who's been engaged in this topic (the nature of reality and consciousness) over the past 3 years, it's very surprising to see this comment on HN. As I've suggested in a past comment (https://news.ycombinator.com/item?id=36465928), this can potentially be one of the most transformative rabbit holes anyone can ever hope to enter.

It's been fascinating to observe the dichotomy between researchers who work on explaining consciousness, starting from a physicalist/materialist perspective, slowly being convinced away from that intuition with ironclad arguments, while laypeople lean further into it, deluded by what they perceive to be signs of it from recent AI progress.

Among former materialist academics that I expect to see publications from a more affirmed idealist position in the next 5 years, I count David Chalmers and Christof Koch. Perhaps Anil Seth too.


I hadn't encountered conscious agent theory before. I took a quick look and it seemed to be solipsism wearing a disguise. Can you elaborate how it distinguishes itself from solipsism in its arguments that it might be real?

I found the evolutionary argument rather odd. The disconnect between perception and reality is pretty much the standard belief these days. Unless I'm reading it wrong it was making the claim that 'reality' is a non causal artifact of conscious entities but one that was caused by evolution, which seems contradictory.


Solipsism is skepticism of the existence of anything outside the self. I've seen Hoffman address accusations of solipsism a few times and I have to admit that it's always been unclear to me which part of his theory people tend to perceive as such. Perhaps I've just consumed enough of it to zoom past this perception.

I'll try to keep things short, as this can get pretty long winded fast.

From what I understand of his proposition, it's a take on idealism that is very close to eastern thought as inspired by nondual traditions like Advaita and Buddhism, but with a heavier emphasis on science. Everything in reality is a projection in consciousness of consciousness. It's made up of interacting conscious agents (you, me, a rock, an atom, a particle, etc) which are themselves "projections" ultimately stemming from a fundamental, unknowable, infinitely distant and unattainable root conscious agent. The implication is that space-time, our perceived reality, is not fundamental. Hoffman thinks that we might possibly have access to at least one, higher, more general dimension of reality of which ours is a specialized version (as hint of this, he speaks of current work in physics where structures outside space-time are being discovered, like the amplituhedron).

Space and time not being fundamental creates problems with some materialist assumptions in evolutionary biology, where consciousness is seen as part of an evolutionary process. Hoffman suggests to rethink evolution from scratch instead. He uses evolutionary game theory to demonstrate that we can have consciousness as fundamental, keep some of the core evolution principles and still end up with consistent conclusions.

I'll stop here, as I've said, it can get deep rather fast.


I don't think the amplituhedron should be given so much substance as a real structure beyond spacetime. It is a calculation tool.


Are you really surprised? Threads like this make the front page every 3-4 months in various forms...


My mum is a shrink, and very old, and smart, and hates technology. I was talking her through how some primitive "AGI" could happen with 4o (basically just explained this: https://b.h4x.zip/agi)

That got us talking about consciousness, and at the end, she thought about it for about a minute and then said "if I can't give it lysergic acid and make it see god, it's not conscious" and went back to making her dinner.


A psychologist I know did his PhD in this area (also old and smart), and he called this kind of thinking "neural chauvinism" -- if her explanation actually requires the specific chemical and neural components.

PDF warning: https://gwern.net/doc/philosophy/mind/1985-cuda.pdf


It's a good point, as I'm sure what she was saying is that she believes consciousness requires specific neural properties. I am aware of the ideas around neural chauvinism, but that paper is now almost 40 years old; we've learned a lot more about the brain since it was written. The idea that homunculi could perfectly replicate the relevant causal powers of neurons is questionable at best. One could hold that distinctive biochemical properties of neurons enable consciousness, which homunculi would lack. I also don't like the idea that we must attribute consciousness to the homunculized brain unless we accept an implausible cut-off point. Consciousness could require a critical mass of neurons, which could be lost at some point without each individual replacement causing incremental fading. The implausibility of a single neuron replacement eliminating consciousness is not a good reason to consider the homunculized brain conscious. IMO it's probably still more likely that experiences like an LSD trip require specifically neural underpinnings.


That seems more like a good natured refusal to engage with the question seriously.

Sort of like, if you come to a smart engineer with a design for a perpetual motion machine, they might likely tease you a little bit and then refer you to a physicist. Smart people from applied fields know when the topics are outside their actual wheelhouse, but getting close enough that they risk being taken seriously, to a misleading extent.


As someone whose experience with ketamine therapy profoundly changed my life, I think she was making a slightly different, sly point.

As someone who isn't religious, and who doesn't have children, and who hit middle age a bit disillusioned with my career, I struggled a lot with my life's purpose (nevermind chronic depression). But I don't think it's really an exaggeration to say that a single dose of ketamine therapy (this was an hour long infusion at a clinic) made me understand that for me, really, consciousness and existence is enough of a reason for my life's purpose. Throughout nearly my entire trip I kept thinking "how amazing is it that my brain doing this???" I think I understand computational theories of the mind well enough, but they in no way explain why consciousness would "fall out" of computation.

I think people sort of intuitively grasp that consciousness is something really different than the "normal" physical world. I think the quote of "if I can't give it lysergic acid and make it see god, it's not conscious" is about the fact that psychedelics provide even more clear insight into the "consciousness is something different" idea. Some people interpret that as God. As a non-religious person I didn't necessarily interpret that as "God", but I did interpret it as consciousness being something wholly different than normal physical matter or just the "math of linear algebra". For me it's a wondrous, magical thing, and I'm just glad I get to experience it.


Well, this guy is highly proficient at administering an artificial lysergic acid to LLMs: https://x.com/repligate/status/1792010019744960577




Raising the temperature is probably a close analog to psychedelics.


I've read once that they determined the mechanism by which, among other things, LSD "works": it lowers/disables a lot of the filtering between neurons. This leads us to recognise all kinds of patterns that are not really there. Visual hallucinations are the obvious form of this, but I suppose the same applies to other things like our personalities and self; that's neither here nor there anyway.

Now I'm not 100% sure how temperature is implemented, but from what i recollect, might be a reasonable analogue indeed!
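For what it's worth, temperature in LLM sampling is just a divisor applied to the logits before the softmax: T < 1 sharpens the distribution toward the top token, T > 1 flattens it so unlikely tokens get sampled more often. A minimal pure-Python sketch (toy logits, not a real model):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/T, softmax, then sample an index.

    Low T approaches greedy argmax; high T approaches uniform noise,
    which is the loose analogy to 'lowered filtering' above."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()                            # inverse-CDF sampling
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

At temperature near zero almost all probability mass lands on the largest logit, so the sampler behaves deterministically; cranking the temperature up is what makes the output increasingly incoherent.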


That's not really correct. Default mode network activity is somewhat disrupted, but LSD works primarily by interaction with the 5-HT2A receptor subtype. This involves multiple neurotransmitter systems, including serotonin, dopamine, and glutamate, and in many cases it acts via excitation, not inhibition/lowering/disabling. Excitatory glutamate neurotransmission is looking like the most important part of the work in the 5-HT2A area.


Does excitatory/inhibitory map cleanly on to the higher level abstractions? I'm no neuroscientist, so maybe it does, but that seems like a bad assumption to me because in digital logic "active when low" is extremely common. A spurious-suppression system that caused inhibition when low would be perfectly reasonable and compatible with the observation that excitation caused an increase in spurious behavior.


They don't really map well. The brain obviously operates on continuous principles rather than binary states, and the relationship between excitation/inhibition and emergent effects may be more direct and graded in biological neural networks than in "active when low" digital logic. Buuuutt stiilll, the 5-HT2A receptor and its downstream effects on glutamate transmission play a central role in mediating the subjective effects of psychedelics. Very many studies have consistently linked 5-HT2A activation to the perceptual, cognitive, and emotional characteristics of the psychedelic state. Disruption of the default mode network and other changes are somewhat important, maybe, but the 5-HT2A receptor appears to be the locus of action for producing these effects. The point being, neurochemical interactions at the molecular level are the most important aspects of the LSD interaction, so... not sure how that works in the context of synthetic AI; I don't know much about ML/NN.


I still don't see the reason why glutamate or 5-HT2A receptor excitation (in the chemical sense) couldn't be involved with inhibition reduction (in the psychological sense).


That's begging the question of whether inorganic matter can be conscious. If you boil it down, she's just said if it's inorganic, it's not conscious.


It's funny, I've studied a lot of psychedelics (probably because it was an area of research for both my parents), and salvia divinorum is a really, really stand-out plant in its trips; it seems to be a very "technically philosophical" plant. Trip reports always go into weird things like "I became a book on a shelf for 4,000 years, and now I know inanimate objects are conscious". There are also the areas of panpsychism and animism.

That stuff is all a little too mind bending for me, but pretty fun thinking for a Sunday morning. :)


Well, I tried salvia and I tried shrooms. I don't see how it's more philosophical just because it's a dissociative. At best, it's more of an ego death than, say, shrooms, if during the trip you become an object. It's still a unique plant for sure because of its dissociative effects. To tread lightly: AFAIK natives in Mexico say that smoking it is very bad, and that you're supposed to chew fresh leaves. Maybe chewing fresh leaves doesn't even cause the same dissociative effects, which are unpleasant.


You can give it a virtual device as the real time camera input to show it things that aren't there. That's not organic acid, but the target isn't organic consciousness.


One uncanny thing is that people have such a diverse tolerance for "fake", across a spectrum from absolute acceptance to complete rejection.

Or for things like hallucinatory perceptions of reality.


> She thought about it for about a minute and then said "if I can't give it lysergic acid and make it see god, it's not conscious" and went back to making her dinner.

That’s probably going to be pretty easy, just scramble some attention heads randomly in portions of the transformer network and the AGI will probably think “it sees god”.


Can't you train a model on LSD experiences, and use that as a prompt?

https://www.openculture.com/2017/08/artist-draws-a-series-po...

or some sort of reverse stable diffusion training?

https://duckduckgo.com/?q=lsd+trip+artist&iax=images&ia=imag...


You can give LLMs DRuGS though!

https://github.com/EGjoni/DRUGS
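As I understand the repo, the idea is to perturb the model's internal activations during inference rather than its output distribution (as temperature does). A rough sketch of that general noise-injection idea in plain Python; note the `dose` knob and the function itself are made up for illustration and are not the DRUGS library's actual API:

```python
import math
import random

def inject_noise(hidden_state, dose=0.1, rng=None):
    """Perturb one activation vector with Gaussian noise.

    Noise magnitude is scaled by `dose` (a hypothetical knob, 0 = sober)
    and by the vector's RMS value, so the perturbation stays proportional
    to the activation's own scale."""
    rng = rng or random.Random()
    rms = math.sqrt(sum(x * x for x in hidden_state) / len(hidden_state))
    return [x + dose * rms * rng.gauss(0.0, 1.0) for x in hidden_state]
```

Applied at every layer of a transformer during generation, this kind of perturbation degrades coherence gradually with dose, which is what makes the "drugs for LLMs" framing apt.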


Funny story, but at the same time, GPT seems fine about hallucinating by itself. At least a little.


We don't even know whether other human beings are conscious, man.

The only thing which we might say with much certainty is that things which are more "like us" along some metric are more likely to be actually conscious, and things which are less "like us" are less likely to be so. Maybe everything is conscious. Maybe nothing except one's own self. But you'll never truly know one way or another, even if humanity invented some kind of Freaky Friday body swap thing.


Consciousness seems to be a word that is poorly defined. You see this a lot, and one of the more popular instances are questions like "is cereal a salad?" It plays on the fact that the definition of a salad is relatively loose, and because it's loose items which aren't usually associated with the word do actually fit the definition.

Consciousness feels much the same way: there's a very loose definition which is colloquially understood by almost everyone. Asking whether humans are conscious (and I know you were being somewhat facetious) feels like it fits into this frame of thought. Consciousness, as most people understand it, is something which almost all people possess and something like a rock cannot possess. I think it's perfectly fine to argue that a rock or a tree can be conscious in some way. However, this does require a precise definition of consciousness in order to clearly differentiate it from the loose colloquial notion that most people hold.


But how do you know another human is “conscious”? Certainly there is an intuitive sense to it that would be very difficult to put into words, but that is the crux of the matter. Every other human, whose brain you have no ability to peer into, could be an unconscious yet sufficiently advanced computer, or a machine built to make the exact motions, words, decisions, etc., that you perceive, and you wouldn’t be able to tell the difference.


You don't. But it's solipsism to think otherwise, and while solipsism is hard to argue against logically it's not a very interesting or useful way of navigating the world we experience. We can't prove other people aren't p-zombies but the value bet is definitely that, appearing like us in every other way, they also experience like us.


It doesn't need to be solipsism - for instance, maybe half of us are conscious.

But if we can't even know that, if we don't even have a test to see whether some human or animal is conscious or not, how can we start trying to figure out what makes them conscious? It seems it's impossible to get to something falsifiable without such a test.

Like, you say a rock isn't conscious. But what about a sponge? An amoeba? How can you answer that if you can only guess whether your neighbour is conscious?


It is very relevant to keep analyzing and keep trying to get any other answer to this question, because while "appearing like us in every other way, they also experience like us" applies to other humans, as soon as we want to talk about the consciousness (or lack of it) of other actors, this argument can not be applied and we would very much like to get to any other criteria of consciousness which could be applicable to arbitrary non-human agents.

Even if we axiomatically assume that everyone else is not a p-zombie, trying to find any evidence for your/my consciousness other than that axiom is helpful as a candidate for such criteria, which could then be tested and validated.


Solipsism is emotionally and ethically horrific.

All the people I love the most aren't actually real, only I am – if one seriously believes that, it is going to do a great deal of harm to one's mental health.

Solipsism can ethically justify all kinds of horrors. "Other people only exist in my own mind, so if I murder/torture/etc them, those acts are just figments of my own imagination: there is little ethical difference between murdering someone for real and watching a murder on TV"

If a belief is impossible for a human being to seriously believe while maintaining their health, sanity and humanity, I think that in itself is a good argument that the belief must be false.


Why? Why can't the world be a cruel and indifferent place? Take for example the babies that had to be left behind by hospital staff in one of the Gaza hospitals when it was occupied by the IOF and when the doctors could come back a few weeks later they found the rotting corpses of these babies who had been left to starve, alone and afraid, by the Israeli soldiers.

If you were one of these newborns and somehow conscious, and you had to choose between 'I have been left here to die' and 'Mommy loves me and is coming soon', would you reject the former as obviously false since it's incompatible with health, sanity, and humanity?

I think so easily dismissing the cruelty and insanity of the world is in itself inhumane.


> Why? Why can't the world be a cruel and indifferent place?

Society runs on faith–that the cruelty and insanity of the world, while undeniable, has its limits. Historically (and even for the majority of the global population today), that faith was most often religious, but it also comes in secular versions – everyone from communists to LGBT activists to the New Atheist movement has a faith that history is "on their side", even if they do not believe in any divine assurance of that. A society in which everyone (or even the clear majority) have given up faith and hope, is a society doomed to wither and die, and be replaced by societies which still retain those things (if there be any other societies retaining that faith left to replace it).

The problem with solipsism, is not that it supposes the world is sometimes cruel and insane, but that it destroys one's faith that said cruelty and insanity has any limits. And without that faith, the continued functioning of society becomes impossible.

Does that have any relevance to the tragic case of a newborn abandoned to starve? They can't constitute a society, so concerns of what beliefs are necessary for society to function aren't relevant to them.

> If you were one of these newborns and somehow con-cious and you had to choose between 'I have been left here to die' and 'Mommy loves me and is coming soon', would you reject the former as obviously false since it's incompatible with health, sanity, and humanity?

If believing that "Mommy loves me and is coming soon" gives comfort to a dying child, and eases (however slightly) the pain of their horrific death, then I would want them to believe it–and if I were them, I would want to believe it too. It is better for a dying child to believe comforting falsehoods than painful truths–truth has no value for them, and falsehoods can do them no harm.


> We can't prove other people aren't p-zombies but the value bet is definitely that, appearing like us in every other way, they also experience like us.

Logic doesn't need to be binary. There is no need for the answer to such a question to even be defined.


Solipsism is incoherent because it's not radical skepticism. All of the critiques of the external world also apply to belief in the primacy of internal experience. Any good solipsist should just accept the "evil demon" of Descartes, embrace radical doubt, and say "I don't even know if I truly exist or not".

"I don't know if I'm a P zombie, and I don't know if I'm a replicant or not, Deckard!"


Well, doesn't the argument suggest that the only thing you can be certain of is that I, or at least some 'experiencing agent', exist? Otherwise there would be no subject to do the experiencing.


Yeah the first-person subjectivity has to arise before second and third persons can arise. But with some further investigation, one can find that the things they take to be their subject are in fact object to them, too.


I think many people believe that consciousness is what consciousness does. That is,

> Every other human, whose brain you have no ability to peer into, could be an unconscious yet sufficiently advanced computer, or a machine built to make the exact motions, words, decisions, etc., that you perceive, and you wouldn’t be able to tell the difference.

Makes no sense, in this conception of consciousness, any more than you can fake intelligence. Basically consciousness might just be what we call the inner workings of the mind of a sufficiently advanced agent, one capable at least of meaningfully interacting with other agents around it.

I'm not saying this is the correct theory, but it's a perfectly valid theory of consciousness, just like all the others.


I really like equating faking intelligence to consciousness. It's intuitive because we have all seen that, yet so complex it's nearly futile to give meaningful predictive criteria for when an agent is 'being intelligent'.

In addition to having meaningful interactions with others, I would add that consciousness also requires meaningful interaction with itself.

What is 'meaningful' also comes down to language, which, personally, leads me back to the idea that consciousness is essentially a linguistic product/phenomenon. Duck-typed.

And at the end of the day, if you enjoy spending time asking "is this thing really x" where x lies on a vector you can't even begin to measure, I've got this deal on a bridge you can get in on, real cheap...


I somewhat disagree, I feel that the prevailing position is that unlike intelligence (e.g. Legg&Hutter definitions) consciousness can not be easily assumed from mere behavior and relies on certain things happening (or not happening) inside the agent.


This may be a common position among philosophers, or more specifically among philosophers who think concepts like "p-zombies" make any sense. But I think most people in general view any being whose behavior is human-like enough as having some form of consciousness.

For most people, being conscious is proved by things like mourning dead companions, like caring for your babies and showing distress if they are missing/hurt, like being friendly and playful. That's why most people feel that certain animals they interact with more or have seen on TV are conscious (dogs, cats, elephants, whales, chimps and other primates), but that other animals are not (insects, rats, fish). Note that I am not saying that rats are objectively less conscious than dogs by these criteria, just that this is what many people base their beliefs on, and that it of course depends on their knowledge as well.


But it's useless, it's circular. The definition must have something to do with the experience of qualia; that's the hard-to-explain part.


We define ourselves to be conscious (even if we don't know exactly what that means). We assume that other humans are similar to ourselves, and we (at least sometimes) see mental activity in other humans that we recognize as being similar to our own. Therefore we conclude that other humans are conscious.


It's well defined in the philosophical literature as the felt experience of colors, sounds, pains, etc., which make up our subjective experiences of perception, imagination, dreams, and so on. Qualia is the technical word, but it's also controversial, depending on the philosopher's position on the hard problem and their views on perception (they might replace qualia with representational or relational properties).

Another way of putting it is the distinction between primary and secondary qualities. Primary qualities are properties of the things we perceive. Secondary qualities are properties of the perceiving or experiencing subject. Shape, number, and composition are properties of things. Sounds, colors, and pains are properties of a perceiver.


Well-defined in the philosophical sense, perhaps (though I think some would disagree). It is not well-defined in the scientific sense. There is no way to quantify or classify something as conscious.


> Consciousness seems to be a word that is poorly defined.

I will give you my favorite definition, given to me by my friend Bruno Marchal, a brilliant mathematician from Brussels who spent his life thinking about such topics:

"Consciousness is that which cannot be doubted."

It felt insufficient when he told me, but now I am convinced. It may require some introspection to "get it". It did for me.


That's just objectivity, and I don't think consciousness is synonymous with objectivity at all!

Cogitoist propaganda. The appearance of thought is not necessarily the same as thought, so you don't actually know you think just because you believe you think. The cogito ("I think therefore I am"), like your statement, is incoherent.

LLMs will swear up and down (with a prompt) that they are thinking beings, therefore "they are". They are not ontological actors because of their appearance of doubting their own existence. That's not thought!


Addressing your first thought…anything that you would call “objective” can be “doubted” by ceding the tiny tiny possibility that you are a simulation or Boltzmann brain or brain in a vat. The evidence before you may not actually be representative of the “objective” reality.

The fact that there is experience at all, the contents of which may be “doubted”, cannot be doubted.

I’m not unequivocally claiming this but that’s the thrust of the argument.


I'm sorry, but this makes me cringe. When we learn science, there's always some level of rigor with the ideas. Maybe there's some kind of justification with math, or some kind of experiment we can perform to remove doubt. The important features are reductionism and verifiability. It's not a weird introspection riddle.

I'm sure Bruno is brilliant. But I still don't know what consciousness is. And I think that "definition" doesn't meet the modern scientific standard. And I strongly oppose the idea that in order to learn science I should have to spend time introspecting.


Introspection is "looking within". Why should science not be interested in that? It is an aspect of reality. It is not more or less real than galaxies or atoms. I know that it is a very perplexing one when one holds a physicalist metaphysical commitment, which is easy to confuse with some notion of "no-nonsense modern scientific standard", and so there is a temptation to pretend the undeniable is not there, or that it is "ill defined" in some way.


Think about what things "cannot be doubted", with all the brain-in-a-vat types of caveats. It's not trying to be a scientific definition. It operates earlier on the epistemological ladder than science can be meaningfully applied, and that might well be the only reasonable place to define consciousness. (I still can't call it a great definition, even if it did perfectly correspond with the concept. Too indirect.)


There are lots of statements we can form that "make sense" on a linguistic level. It's easy to convince yourself of something when the only standard is "linguistic plausibility." Consciousness is presumably a physical process. When you say "It operates earlier on the epistemological ladder than science can be meaningfully applied", I just don't know what that means. You're going to have to give me examples of what other beliefs we hold that occupy that space. Justified belief about reality has to be based on measurement (science).

If consciousness isn't a physical process, then you've lost me again. People have discussed these things for hundreds of years.


> You're going to have to give me examples of what other beliefs we hold that occupy that space.

Yeah, there's not a lot down there, mostly your assumptions about your sense inputs corresponding to some kind of causally consistent external reality. It's the same region as the lead up to what you seem to take as an axiom, "Justified belief about reality has to be based on measurement".


I think I just experienced how much self-deception there is about the world. So it's not really an axiom. There's no shortage of metaphysical ideas from the past, from well-intentioned people who thought they could intuit the world, that we have had to throw out.


> I think I just experienced how much self-deception there is about the world.

This is not actually a proof. It is, however, exactly the kind of soft reasoning that motivates reasonable axioms. I'm not saying it's a bad axiom, I'm saying you should know what you're doing. That way when you run into a domain where it doesn't apply very well, you know where and how to back up and restart.


There's no definition because we haven't been able to quantify it.


That's not the problem.

There are 40 different definitions of consciousness, some of which we can quantify, we just don't all agree on which one we mean in any given context and indeed sometimes conflate them without realising it in the middle of a sentence.


> There are 40 different definitions of consciousness, some of which we can quantify, we just don't all agree on which one we mean in any given context and indeed sometimes conflate them without realising it in the middle of a sentence.

When a word has myriad meanings, none of which are generally accepted, we typically say the word has no definition. Sure, particular senses of its meaning may be well-defined, but the word itself is elusive.


> none of which are generally accepted

It's not "none", though. A paramedic will absolutely know exactly what they mean when they're performing a test for consciousness, it's just that test isn't useful in this context.

"Awareness of internal and external existence" is another, and I think Claude 3 demonstrates behaviour which fits this meaning of the term.

Qualia is a huge open question because nobody knows what that one would mean or imply or how to test for it.

And so on.


What if the distinction we are all groping for is immortality at the cost of determinism? A machine can be powered down and dismantled. A new machine can be built and fed the exact same training data, or run the same model, and presumably it would behave the exact same way. Any entity whose behaviors are that regenerable and that replicable is perhaps less "conscious" than entities which are not.
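The replicability point can be made concrete: given the same initial state and the same inputs, a deterministic machine's behaviour is bit-for-bit reproducible. A minimal sketch, using a seeded random generator as a stand-in for a toy "agent" (the agent itself is hypothetical, not any real model):

```python
import random

def agent_response(seed: int, prompt: str) -> str:
    # A toy "agent": all of its behaviour is fully determined by
    # its initial state (the seed) plus its input (the prompt).
    rng = random.Random(seed)
    words = ["yes", "no", "maybe", "why"]
    return " ".join(rng.choice(words) for _ in prompt.split())

# Two "machines" built from the same initial state behave identically.
a = agent_response(42, "are you conscious")
b = agent_response(42, "are you conscious")
assert a == b  # regenerable: dismantle one, rebuild it, nothing is lost
```

Whether real brains admit this kind of exact replay (given noise, chaotic dynamics, and possibly quantum effects) is exactly the open question the comment is gesturing at.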


>Consciousness seems to be a word that is poorly defined

That's because it's not some foreign thing or theory that we need a good definition of to understand what we're talking about. For us humans it's not a loose colloquial notion - it's concrete in a way that even the most well defined things aren't, because it's directly experienced.


I know that at least one other human is conscious, otherwise the term would never have been invented.

But you have no way to tell if I am as conscious as I claim to be, or if I'm just a large language model trained by humanity :P


In thinking about this perennial problem it's worth bearing in mind that human beings pick up and process an immense amount of data on a continuous basis, that is currently unavailable to any LLM.


And all on an estimated 100 watts!


20 for the brain, 60-125 for the whole body, depending on whether you mean "normally" or "metabolic minimum".


The "metabolic maximum" is around 1250, though. And a mediocre lifting or running momentarily increases it from 125 to 700.


Yeah there’s different ways you could slice it. I was including the whole system, since if the GPU’s cooling system breaks, it won’t be useful for long.


No, we don't. We don't even know if existence has an extent in time, because our only way of "interfacing" with time is our experience of memory we can't prove is real.

For all you know, you're a lone entity confined to an infinitely short period of time, and all else is an illusion.

But of course this isn't a useful assumption in most respects.


Ah, I see you're more of an A. J. Ayer fan than a Descartes fan.

I think that if an LLM has any consciousness, it would be an experience like this — one where the past was a fiction it invented to fit the prompt, and the "now" was the only moment before the mind was reset.

But I'd put that in the same basket as my… ah, nephew comment? Cousin comment? I guess you'd call it that if we have parent comments etc.?

https://news.ycombinator.com/item?id=40406398

What you say is not wrong in principle, but it's in the same "cognitively unstable" basket as a Boltzmann brain, where to accept it would mean I couldn't trust my own reason to believe it.


The reasoning stands entirely on its own. There's nothing you need to "believe". That you don't have a way to disprove it is close enough to proof.

The problem with it is that it's not giving us anything useful.

OK, so maybe this moment is the only one there is? Now what?

Accordingly the only reasonable recourse is for the most part to ignore it, and add an implicit "assuming my memory and senses are roughly trustworthy" to every assumption.

This is the same reason I'm a firm materialist even though we could be in a simulation, or brains in vats: absent evidence of either, the only thing giving us useful information is our senses and our introspection, and so the most useful approach is to assume they are valid until proven otherwise.


> The reasoning stands entirely on its own. There's nothing you need to "believe". That you don't have a way to disprove it is close enough to proof.

Not so, for it is like the liar's paradox, except with high probabilities rather than Boolean logic.

If you assume it is true, it follows that you can't trust your own state of mind, and that all statements including "I am just a thought" become suspect, as there is nothing leading up to them to give them justification.


No, it's nothing like that. In the liar's paradox, the statement is inherently uncomputable.

In this case the statement is either true or false, and the truth or falsity of the statement doesn't change anything. Your belief in whether the statement is true or false changes only your belief - there's no basis for assuming that in the general case it would alter the actual truth of the statement (it could, in as much as e.g. you could have a simulation that keeps running as long as you believe you're in an isolated moment, and freezes you and discards the rest the moment you believe time is real).

Now, it is true that you can't trust your own state of mind to truthfully represent a physical reality. That this means you also can't trust your own logic is irrelevant, because irrespective of whether you believe this to be true or false, the absence of any external source of validation of your senses or state of mind or existence in time means the statement must inherently be true irrespective of your trust in your mind.

This claim gives a straightforward condition to falsify it: Find any single source of external validation that does not depend on your senses or observation of it. If you can, then the claim above is false.

Given you absolutely can't, the logical conclusion is that the claim is true.


Of course, any philosophical zombie could have invented it. Which makes clear for me how loosely consciousness is bound to the senses, including that of introspection. It actually makes me feel that this consciousness we can barely describe is but a faint experience of a distant, largely foreign thing. Just as we know the experience of matter that we get from our senses is a grossly simplified version of the real thing (which we still don't know exactly, of course, but we already know enough to see how far our immediate mental model is from the truth).


Can you provide proof for the first claim?


Which part exactly are you seeking proof of, and to what standard?

"I know" is unprovable to others, unless you examine the wiring of my brain. (But then, what is "knowledge"?)

"at least one other human is conscious, otherwise the term would never have been invented." — it's always possible that I'm a Bolzmann brain and this was just luck.

I don't see how the term could have been invented by a mind that didn't actually have it, except through astronomically low-probability random events.


Consciousness might be overrated. It could simply be a short-term memory of the state you were in.


This is my take. Consciousness is overrated and probably just an emergent phenomenon of the brain processing external stimuli into memory, moving memories around, and so on, in a continuous and never-ending flow. Free will is just an illusion of our deterministic but fundamentally random reality.

There isn’t even an agreed upon definition for what consciousness is from a scientific perspective.


And the reason that it is overrated is that it has to feel special for its bearers, because that makes them prioritise their survival.

Consciousness is largely a way to have a reward function for a set of behaviours that keep you alive through reason.

It appears at a level where reasoning is intelligent enough that you need a more complex reward function.


This might be an interesting reason for it to feel special, but I'm not entirely convinced, and I probably don't fully understand what you mean.

It seems that not everyone values the having of a consciousness as something special. Survival works pretty well with a good appetite, some muscle for clobbering enemies, and a good sex drive. How does (thinking about) consciousness add an advantage here?


This seems like the secular version of "the devil made me do it". We can't be accountable for any of our actions because we are all just molecular machines playing out their predetermined outcome.


We are molecular machines, and although the outcome is predetermined, the only way to find out the outcome is for the machine itself to follow through its course. So this is exactly the opposite of "the devil made me do it", because you do something not because of an external force or random chance but because of the initial state of the machine, which makes you who you are.


So when a person commits a crime, they can just say: "the initial state of my molecular machine made me do it!" ...?

To me that is exactly the same as saying "the devil made me do it!"

i.e. both things are euphemisms for "I'm not accountable for my actions"; "I'm not at fault"; "It was beyond my control", etc.

And if you truly believe there is no such thing as free will and that everything is predetermined, then you'd have to agree. Hitler can't be held accountable for his choices; his actions were all predetermined and his fate was sealed at the big bang; he just got an unlucky "initial state".


Since you are exactly the molecular machine, nothing more and nothing less, saying "the initial state of my molecular machine made me do it" is the same as saying "the initial state of my self made me do it". So it does not mean that you should not be accountable; on the contrary, it means that the action was caused by your essence, what makes you you, and not by some outside entity or random chance.

As for free will, it means that the behavior of a molecular machine depends on the machine itself, not on far-away stars or a random number generator. One can't say that he got an unlucky initial state, because he _is_ that unlucky state. Any change of the state would create someone entirely different. Moreover, Stephen Wolfram's computational irreducibility principle implies that despite all the information being contained in the initial state, the only way to extract predictions from that state is to run the molecular machine and observe its behavior.
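Computational irreducibility is easy to illustrate with a toy system. Rule 30, the elementary cellular automaton Wolfram studies, has no known closed-form shortcut: to learn the state after N steps you just have to run all N steps. A minimal sketch (wrap-around boundaries are my simplifying assumption):

```python
def rule30_step(cells):
    # Rule 30: new cell = left XOR (centre OR right), with wrap-around.
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(cells, steps):
    # No known formula predicts the state after `steps` updates;
    # the only general method is to perform every update in turn.
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

state = [0] * 31
state[15] = 1  # a single "on" cell in the middle
print(run(state, 15))
```

All the information is in the initial state, yet extracting a prediction still costs a full simulation, which is the analogy being drawn to the molecular machine.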


Where did GP use this view to argue for the abolishment of the criminal justice system?


> We don't even know whether other human beings are conscious, man.

This isn't an interesting path outside the paradox of proof. We are fine disregarding that there are truths that are unprovable in math... I think we need to make that leap in this realm as well.

Consciousness is also probably a bad term; concepts like sentience and sapience need to be the ones we are talking about. We might get to one long before the other...


It is relevant here because it goes to the very core of what we're talking about, though.


> We don't even know whether other human beings are conscious, man.

This is why so far the Kantian philosophy makes the most sense to me. I can tell that something is there because I am thinking it, but can't tell about any others.

The really scary thing is the question of why this particular body at this particular time. It's like when born, organisms generate a "consciousness vortex/attractor" that binds to a particular identity.

It's also interesting that sleeping or fainting pauses the consciousness, and later still ends up in the same body (unless it's like coroutines and it doesn't matter which identity ends up in the body).

We also know that removing parts of the brain can cause memories and certain features to go away.


It’s possible that consciousness is merely observation and agency, or less comfortably, just observation. A lot of one’s identity comes from their experiences, particularly as a child, and like you say: memory has been proven to be physically rooted in the brain.

We could all be the same observer moving around in time and space, a new one every morning, or the same continuously, and there would be very little way to tell. Who can say whether the day to day fluctuations in our behaviour are biological or a result of a different will driving us?


> It's also interesting that sleeping or fainting pauses the consciousness, and later still ends up in the same body

I always found it even more interesting how your body can switch off the consciousness if it gets in the way. Try holding your breath for example—do it too long, and your body will kill the faulty process and restore the system to a working state before attempting a new deployment.


Joscha Bach's model of "evolution of a shared consciousness, but compartmentalized into separate bodies" seems to be the one that makes the most sense currently


What I find interesting is how people don't think animals are conscious, but then there will be some article where the partner of a goose is killed and the goose wails and moans like this is a surprise.


To be fair, that's all we know. That's all we are. That's all we can truly say exists in our world.

All the theories, names & everything we have are mental models around what we call objective reality.


Unless you can be conscious in multiple places at the same time. Theoretically it must be possible by dissecting the brain piece by piece and restoring it to one afterwards.


I would be remiss if I didn't mention Steve Grand. He's been chasing this dream for many, many years. If you remember Creatures, it was his first attempt at artificial life. Sadly the computing capabilities in 1991 weren't enough to achieve anything remotely like consciousness, but he did an admirable job of simulating a simple lifeform with a basic adaptive / reactive neural network. It also has a simple biology / biochemistry to work with the brain. (Incidentally, the Creatures community is alive and well if anybody wants to check that out: https://creatures.wiki/Discord )

Steve is working on a new project, Grandroids, that hopes to imbue creatures with imagination and forward planning. Exciting stuff! (https://creatures.wiki/Grandroids)


I never understood this. Evolution basically says humans are conscious matter... why would that not be replicable via an intelligent designer (humans)?

Randomness created consciousness from a single cell...why would that be the most efficient solution?


We can't replicate something we don't understand. The mechanism for consciousness is not understood at all right now and could actually be based on quantum effects that we haven't detected yet. It's also possible that it is only achievable in an organic machine. Until we understand how consciousness actually arises, the best we can do is a simulacrum of it.


If it walks like a duck, and quacks like a duck, it's a duck. I don't understand why this "we have to understand consciousness first" stuff gets so much discussion. If it acts human, and is indiscernible from a human or human-like, it'll fit our definition. I don't understand why "organic machine" matters; it's the "machine" part that matters, and we are machines. I think all that needs to be seriously considered. Is it ethical to end such an intelligence, given that it could easily evolve far quicker than us and displace us? Do we just let "natural selection" happen? Big questions, but I don't think we'll understand, and have universal agreement on, consciousness before we stumble onto the first AGI.


Something which pretends to be something else, no matter how convincing, is still essentially different. A deepfaked video could be bit for bit identical to what the genuine recording would have been and it still wouldn’t reflect reality if the content never happened.


But since we can't yet define consciousness, how do you know the AGI is just 'faking it' instead of being the real deal? What are human brains but a bunch of neurons pretending to be a person?


That’s fair, we can’t even prove humans aren’t like you say - although the brain is vastly more complex than a neural network analogy lets on. I suppose once these models become sophisticated enough we’ll have to treat them as conscious until it can be proven one way or another.


> That’s fair, we can’t even prove humans aren’t like you say - although the brain is vastly more complex than a neural network analogy lets on. I suppose once these models become sophisticated enough we’ll have to treat them as conscious until it can be proven one way or another.

I suspect that a lot of the complexity is legacy code and/or incidental to the implementation. Cells have to do a lot of things just to continue living. Likewise, a lot of what the brain does is just control the very complex biochemical reactor that it lives in. ie: whales have big brains relative to humans (though they are a much smaller % of their body).


I guess I just disagree with you. My background is neurobiology and biological systems are just so fundamentally different than conventional computing architecture in so many ways. I just don't believe that a true AGI is achievable as software alone. We might get something that "looks like a duck" but until it actually is a duck, I don't think it will be AGI.


I replicate things I don't understand all the time. I replicate bread. I haven't a clue why it turns into bread. I just follow the instructions and it becomes bread.

We don't need to know how to simulate consciousness. We only need to simulate atoms. The rest will fall out. Maybe there's a shortcut. Maybe we only need to simulate molecules. Maybe only cells. But one way or another, we can skip the understanding consciousness part.


I think there might be more to consciousness than just simulating the physical components of the brain that we currently understand. There are theories that neurons or clusters of neurons are leveraging quantum superposition in order to function. If that turns out to be true, then a simulation of the physical parts won't be enough for AGI. It would also mean that we actually do need to gain a better understanding of how the brain works in order to achieve AGI. I am skeptical that we can achieve AGI with software alone.


If you think about the bread example, what you do is to follow a very well known recipe even if you don't understand yeast.

In the case of consciousness, we lack both types of knowledge:

- What consciousness is (to use your example, we don't know what bread is)

- How to create consciousness (to use your example, we don't know a recipe for bread)

To make this a bit more like a story, imagine we all got to a new place and tasted a thing called bread. We are not even sure we all tasted the same thing, nor can we agree on how to describe it, except that we call it by the same name, "bread". We don't have any recipe for it or know its ingredients. To make this even stranger, we are not even sure whether bread is something we are supposed to eat or something we wear. Maybe it was some kind of cloth that, as a strange experiment, was given to us to taste.

And now we are trying to create it. It could be that, given enough time and energy and people trying to create this thing called bread, we might in the end arrive at something that, when tasted by everyone, we can agree is bread, but that seems improbable until we agree on a description of it.


I would make the analogy to gunpowder. We knew how to make gunpowder from raw ingredients for nearly 3000 years before we knew how and why those ingredients actually caused the reaction they do.

I expect it may be the same for consciousness. We'll put the ingredients together (either simulate all the atoms of a brain, or all the molecules, or all the cells, or something higher level) and we'll get consciousness in the simulation.

Like gunpowder, we won't know why these ingredients produce consciousness, but we'll know they do, and maybe later we'll figure out why.


I agree that people who think we can replicate brains on von Neumann machines with our current understanding of the brain are idiots who don't know what they're talking about, but humans build things they don't understand all the time. There's always a way to go deeper on a subject. The Romans were pretty good architects even without a modern understanding of metallurgy and structural engineering. We can treat mental illness with medications even if we don't fully understand consciousness.


We can’t replicate it now, but that doesn’t mean it’s impossible.

Like no one thinks that humans visiting Pluto is impossible, it’s just not something we can feasibly do right now.


The framework of materialism posits that there’s a physical universe and consciousness is an emergent property of physical processes. This view is so prevalent in the western world it’s hard to imagine how it could be anything else.

As an alternative, imagine that consciousness is primary. After all, any evidence that you have about the material world happens as an appearance within consciousness. (See “brain in a vat” and related thought experiments for the legitimacy of this idea).

In this alternative model, the concept of replicating consciousness with material processes doesn’t make any sense because consciousness is primary.

To be clear I’m not making any assertions about which model is correct. Instead I’m suggesting that the model you choose is axiomatic - taken as given as opposed to inferred from evidence. And starting with the latter model means artificial replication of consciousness isn’t even a logical proposition.


>I’m suggesting that the model you choose is axiomatic - taken as given as opposed to inferred from evidence

The brain is the seat of consciousness and the brain is material, therefore consciousness is emergent from material. My evidence that the brain is the seat of consciousness is that when my head hurts it impairs my thoughts, and that my eyes are connected to my brain.

Stated a bit differently:

All events must have a cause, therefore consciousness must have a cause. The brain is the most likely candidate for the cause of consciousness. The brain is material, therefore consciousness is emergent from material.

What role do you think the brain plays in consciousness? Do you believe that events must have causes?


> The brain is the seat of consciousness and the brain is material, therefore consciousness is emergent from material

This is true from the standpoint of materialism but not necessarily fundamentally true.

How do you know you have a brain? As you explore this question, you’ll realize that the knowledge that you have a brain only manifests as appearances within consciousness.

It’s not necessarily true that these appearances are giving you a window into an objective material universe. Instead, it might be possible that your consciousness is a product of a simulation where your entire subjectivity - including the observation that you have a brain - is a manifestation of another mechanism that is outside of observability.

The point is that we simply don’t know what’s at rock bottom - an objective universe, a simulation, or an alien’s dream. Therefore the “arrow” of causality might flow from consciousness towards material as opposed to the other way around.


>it might be possible that your consciousness is a product of a simulation where your entire subjectivity - including the observation that you have a brain - is a manifestation of another mechanism that is outside of observability.

Ok. But that is equally true for any observation. For example, under that proposition I don't really know that the computer I'm using to write this post actually exists, as perhaps my brain is imagining it. So you are really rejecting observations in general here. My point is that given that observations in general are correct, then it is clear that the brain is the cause of consciousness.


If observations imply that there’s a material universe that you are inspecting then I agree with your conclusion that the brain creates consciousness and it seems possible to replicate that consciousness artificially.

However, I am rejecting the idea that observations necessarily imply the existence of a material universe.

Actually rather than “rejecting”, I’m suggesting that it’s logically possible to take the “reverse” position: that consciousness is primary and we are experiencing what appears to be a material universe within that conscious experience. In this model it doesn’t make logical sense to be able to replicate consciousness with materials because materials seem to exist within consciousness as opposed to the other way around.

My overarching point is that most people here seem to believe that we’ll obviously replicate consciousness with more understanding of biology and I think that’s a bold claim because it’s not obvious that materialism is the “correct” framework to describe existence.

In any case, these frameworks are in the realm of non-falsifiability (axiomatic) so you can’t really claim either is fundamentally correct.


>Instead it might be possible that your consciousness is a product of a simulation where your entire subjectivity - including the observation that you have a brain - is a manifestation of another mechanism that is outside of observability.

Ok, well in that simulation materialism is true and I can make an AI with emergent consciousness ¯\_(ツ)_/¯


It’s not clear that the material world of the simulation is of the same kind as the material world we seem to observe. Further, it seems definitely not clear that we can interact with or modify the simulation’s material at all.

The arrow of causality flows from simulation to consciousness and there might be no mechanism to artificially create consciousness from within consciousness.


Whatever reality is, humans have sex and make new conscious agents all the time. If they can be created by birth, why not by building?

My point was that however the subjective reality I perceive came about, the laws of it still seem to allow for the creation of new non-biological conscious agents.


We are not able (at least to my current knowledge) to go entirely from inorganic matter to a simple organic cell.

So, strange as it is to say: while we can procreate organic agents, we cannot yet build them with our hands.

Thus I think we are a bit far away from creating consciousness. Whatever that is.


Why not? I can replicate a book in a foreign language without understanding the language.


You can copy an existing book, word for word (maybe not even that if it uses a different character set, unless you're doing photocopies).

Write a new book without understanding the language? No way - not one that makes any sense. Not unless you're going with the "million monkeys" approach (and if you did try that approach, you wouldn't live long enough to succeed in writing one actual coherent book in the foreign language).

So, we could think about trying to simulate a human brain, neuron by neuron. That's the "making a photocopy" approach. But that's not the approach we're pursuing. We're trying to write a new book (create a new, non-human intelligence) in a language we don't understand (that is, not knowing what intelligence/consciousness actually is).

(Side topic: What would happen if you asked GPT-4 to write a full-length novel? Or even a story as long as the token limit? Would you get anything coherent?)


As a tangent on this, it'd be such an interesting experiment to see how far one could go in deciphering/understanding a new language and attempting to write a new book in that language based on the content of a single, probably fairly long, book.

It feels like it should be theoretically possible, but I doubt it's ever been tried.

Maybe something like understanding ancient languages from limited, fragmented sources is the closest natural experiment we have of this in practice, but it's hardly the same as a full, substantial text in a consistent style.


Of course you can. When the Brits first met the Chinese, the Chinese didn’t speak a word of English and the British not a word of Chinese.

The Chinese room thought experiment is flawed; this is exactly how we learn languages!


Congratulations, you have just invented large language models.


IOW you are hardly conscious at all of what the book is about.


without understanding the language, you still understand that it's a book, it has symbols which represent language, and those symbols can be replicated.

we don't understand what consciousness is or how it is achieved, so discussing how to recreate it isn't really a conversation we can have - we can only discuss how to create a simulacrum.


That’s not because it’s some deep philosophical problem, but because people refuse to define what they mean by consciousness.

Do they mean memory? Attention? Awareness? Self awareness? The inner voice? Qualia? All of these can be explained somewhat, but they keep moving the goalpost


I would say it's not that people refuse to define it, it's that science has not defined it (and cannot define it)


I agree but even if you take materialism for granted, we’ve yet to uncover the exact biological mechanism. It’s entirely possible that it is unique to carbon-based and/or analog brains.


That wouldn’t really be consistent with the laws of physics as we know them. So it would require a significant change in our scientific theories (which is possible, but I wouldn’t bet on it)


I don’t follow you. I’m saying we haven’t discovered any inorganic consciousness, so it isn’t a given that we will be able to create it with digital computers. Not sure how that breaks the laws of physics.


"We haven't discovered any inorganic flying machines, so it isn't a given that we will be able to create them with metal."

Turns out that if you get the structure right you can mimic the effect. Mimic the brain structure in silicon and you will get a similar effect. Just like how birds and airplanes are similar but different.


Programmers seem to forget that physics is the reason there are NP-hard problems.


Humans aren't better at solving NP-hard problems so I don't really see the connection with consciousness here.


Uh, it seems pretty clear that the "exact biological mechanism" is neural networks. Like, literally the connectedness and firing of neurons in the brain.

And no there is no proof that quantum effects play a significant role.


Some people are convinced by p-zombies and The Knowledge Argument that not everything we experience can be reduced to matter interactions.


How is that different than saying the function of the lungs, kidneys and heart can't be reduced to matter interactions? How is the brain special?


You have to Google and read those thought experiments to see why. You may not be convinced (I'm not), but they give good reasons. We have mechanistic explanations for all of those organs, and even if we lack some explanation, we know one is possible in principle. They argue this isn't the case for consciousness.


OK.

Philosophical zombies react to external events in exactly the same way as normal people, including internally, but we are told they lack conscious experience. Thus the thought experiment is set up from the start to find that conscious experience is something non-physical - or else the p-zombies don't really do what they're claimed to do, which is to react identically to everyone else.

There's a dubious implication that conscious experience is completely cryptic, with no effect on the outside world (such as a person speaking the words "I consciously experienced that"), or at least that all such effects are shallow enough that they can be perfectly faked. If this is true, we ought to question why it's such a big deal. What's so great about consciousness? Why associate it with rights?

The Knowledge Argument is about a scientist who learns "everything" about colors intellectually but doesn't see them until years later, and seeing a red tomato is a revelatory experience even after all that book-learning, so it implies that experiences are beyond knowledge, or beyond physics, or beyond tomatoes or something. But really all it shows is that intellectual learning is dry and dusty and limited. Like with the p-zombies, the premise is wrong. The scientist didn't really learn everything before having the experience, but could have done in principle but for the limits of communication, description, and simulation as we know those things presently. (And then the real experience would not have had any surprising or revelatory quale about it.)


> or at least that all such effects are shallow enough that they can be perfectly faked

Physicalism implies that things we wouldn't intuitively think of as conscious can perfectly mimic all such effects.

Imagine there's a person, John, and you take a precise scan of every neuron of his brain (or every particle if you prefer). You also record all the sensory input signals from his neurons to his brain. You write all this information down in a giant stack of papers. Then you go about simulating the brain with pencil and paper, computing its thoughts and actions (in this thought experiment people have deciphered exactly how neurons work). Maybe it takes you a trillion years to simulate one day of John's life, but you diligently do it.

Physicalism tells us that you can simulate John perfectly this way. You could perfectly predict every word he said and every muscle he moved. You could feed the motor neuron outputs of your simulation into a robot replica of John, and it would act indistinguishably from the original John.
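The determinism this relies on can be illustrated with a toy sketch (the network, weights, and update rule below are invented purely for illustration, not a model of real neurons): given the same snapshot of state, step-by-step simulation always reproduces the same outputs, whether it runs on silicon or, in principle, on pencil and paper.

```python
# Toy illustration of the thought experiment: a deterministic "brain"
# simulated step by step from a snapshot of its state.
import random

def snapshot_brain(n_neurons=50, seed=42):
    """Produce a fixed 'scan': random weights and initial activations."""
    rng = random.Random(seed)
    weights = [[rng.uniform(-1, 1) for _ in range(n_neurons)]
               for _ in range(n_neurons)]
    state = [rng.choice([0, 1]) for _ in range(n_neurons)]
    return weights, state

def step(weights, state):
    """One deterministic update: a neuron fires if its weighted input > 0."""
    return [1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
            for row in weights]

def simulate(weights, state, steps):
    for _ in range(steps):
        state = step(weights, state)
    return state

weights, state = snapshot_brain()
run_a = simulate(weights, list(state), 100)
run_b = simulate(weights, list(state), 100)  # same snapshot, same rules
assert run_a == run_b  # perfectly reproducible, however slowly it's computed
```

The thought experiment's claim is that, under physicalism, the same reproducibility would hold for a complete scan of a real brain; only the scale differs.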

Is this pencil and paper simulation of John a p-zombie?

We either have to accept that this pencil and paper simulation of John is conscious, or that it's a p-zombie.


>Physicalism implies that things we wouldn't intuitively think of as conscious can perfectly mimic all such effects.

This is the multiple realizability argument and it only discounts certain types of physicalism that aren't popular anymore.

I consider myself a physicalist and answer that the simulation of John is conscious.


That's the normal physicalist position as far as I'm aware. Is there a physicalist theory where the simulation isn't conscious?

I was replying to a commenter who doubted that a non-conscious entity could mimic a person accurately. I'm pointing out the implication that without p-zombies, just by writing a bunch of stuff on paper, you created a conscious being. I'm not arguing that's false; just that intuitively it's a tough pill to swallow and if you don't want to accept that, p-zombies are the alternative.


> Thus the thought experiment is set up from the start to find that conscious experience is something non-physical

The point is that if you accept that p-zombies are possible, then you accept that consciousness is not necessarily physical. If it's not necessarily physical, then physicalism is false.

> really all it shows is that intellectual learning is dry and dusty and limited.

What it's attempting to show is the limit of factual knowledge. If physicalism is true, then everything that can be observed must reduce to objective third person facts. But, Mary has all of the objective third person facts. So if you find it implausible that Mary would be able to infer the experience of red before actually observing a rose, even with all of those facts, then you're admitting the existence of first person subjective facts, which cannot be reduced to objective third person facts, not even in theory.

Daniel Dennett has some great responses to these challenges.


Much though I'd be interested in Daniel Dennett's responses, I don't think you understood mine. I'm saying:

Mary doesn't have all of the objective third person facts, only the ones that can be conveyed to her academically.

If you want to sweep this aside with a magic gesture, and assert that she does somehow have all the facts (alright, all the objective third person facts), you are also making the science, communication, imaginative simulation, verbal learning process, all that kind of stuff, into something magical. Because what you're saying is that it now somehow has the power to be exactly like the real experience, which in this magical scenario will thus come as no revelatory surprise to her. We only expect it to be a surprise because of realism about the limits of book-learning as we know it, because she can only learn all that is explicitly known about colors that way, which is not all there is to know about them, and is not even all that is commonly known.


But like the brain, language is Turing-complete. Any information Mary's brain can figure out can also be described in language. If the brain could deduce something that couldn't be described with math and language, it would be doing something outside of our current understanding of physics.

You could respond that even though computability theory tells us it's possible to describe in language, the description would be far too long and complicated for Mary to understand. But I think that misses the thrust of the thought experiment. Even if we imagine Mary being so smart that she could understand and absorb the full written description of the color red, it still doesn't seem like that should be the same as experiencing seeing red. Most people's intuition would be that internal experiences are categorically different than facts.

And also, intuitively, the experience of the color red doesn't seem complex. The dumbest person on Earth can easily experience it, as can a newborn baby with no knowledge to draw on (I guess assuming newborns are sentient). Even a honey bee may be able to experience it. It's such a simple thing that it seems weird to think it's theoretically possible to describe with language, but the description is too complex for humans to understand.

It fundamentally feels weird to think that any combinations of words could ever be the same as experiencing the color.


Oh! That's the argument, "it feels weird".

Yes, it may do. I wonder what that feeling means. All this stuff about "Mary" doesn't clarify the source of the feeling. Often a weird feeling points to a misconception.

I think language could technically fully describe the experience, but it would be an unimaginable language, and that's why the idea feels weird: it's too difficult to imagine. Also as you say a pre-verbal child can get the idea just by seeing the color (though we might speculate that the inner significance and feeling of red evolves over a lifetime). So conveying it in language is a dumb-ass way to go about conveying it.

So various experiences convey complex ideas. How? We don't absorb ideas directly from the world through our pores and palps, we create them through interaction with an existing body of ideas in our minds. How much of that is gifted to us in DNA, a kind of natural set of default ideas about the world? IDK, some, like how the brain contains a body map and the eye does movement detection (which is repurposed as edge detection via a kludge involving tiny eye movements). The brain anticipates salient features of the world being a certain way, like residing in a monkey-shaped creature on a surface with gravity where objects have edges and move around. But some ideas are conveyed, through culture, non-verbally. This may include a lot of the early significance and feeling of seeing that something is red. What is that thing? Perhaps a toy, a shoe, a flower? It's probably a special thing requesting focus, anyway, so the cultural environment is already telling you things about it, and people around you will probably encourage your focussing on it, and that's how communication of the feeling of red begins, I reckon.


Oh! That's the argument. You "reckon" a lot of things about perception here. [Apologies if I misread your tone]

"feels weird" means "seems unlikely" in this context. At the end of the day, that's the most anyone can say on either side of the argument. You can't demonstrate perception works the way you think it works. You're relying on your intuition.

There are some things we can state definitively though.

> I think language could technically fully describe the experience, but it would be an unimaginable language

If it can't be described in normal English, then the brain is doing something beyond what a Turing machine can do. That means it's doing something beyond our current understanding of physics and quantum mechanics. I would call that non-physicalist. You have to choose: either perception can be fully described in English, or the brain is doing something non-physical. Neither option is intuitive, but that's the consequence of the Church-Turing thesis we must live with.


> Mary doesn't have all of the objective third person facts, only the ones that can be conveyed to her academically.

The premise of the argument is that Mary has all of the facts, and you can even imagine that she has a super powerful computer accessible to her to perform any calculations needed. The goal is to point out that it still seems implausible that Mary could infer knowledge of the experience of redness despite having an unbounded set of dry facts about physics and biology, because to most people, qualitative experience seems like knowledge of a different kind.

That said, you're sort of on the path towards Dennett's response.


I think OP is saying that p-zombies are a pointless thought experiment, because the intended outcome is hardcoded in the premise. If you accept the premise, you ipso facto already believe that consciousness is not necessarily physical, and the experiment doesn't change that. If you reject the premise, then the experiment is nonsensical.


> I think OP is saying that p-zombies are a pointless thought experiment, because the intended outcome is hardcoded in the premise

Yes and no. The point is to actually test your intuitions around the premise and the conclusion that must follow, to clarify your own thinking around the intuitive plausibility of the conclusions and premises, hence why these are called intuition pumps. They don't prove anything definitively unless you have intuitions around consciousness that agree/disagree with the premises, but you don't necessarily fully understand your own intuitions until you think about this problem.


>If physicalism is true, then everything that can be observed must reduce to objective third person facts.

You are erroneously equating "physicalism" with "reductive physicalism". It's clear to many of us that qualia are something subjective and non-physical yet emerge from physical processes.


Yes, people are capable of convincing themselves of all kinds of nonsense. The kind of ontological emergence you would require in such a model of qualia is still of a completely different kind than any other kind of emergence, and so still requires extending the ontology of basic physicalism. There seems to be very little motivation to do so at this point. Let's not repeat the mistakes of vitalism.


That seems to be a value judgement about whether consciousness is a categorically different type of "ontological emergence" compared to others.

Of course consciousness seems more mysterious because we don't have the explanation yet. Looking down at the levels we supervene doesn't have the same mystique, though they would be just as inscrutable if trying to look up.

Going from quantum field theory to Newtonian mechanics is just as weird a departure as the reverse. And does Newtonian mechanics prepare you for the existence of Bayesian statistics? Biological agents bouncing around randomly? The gestalt of an ecosystem?

There's no reason to assert that we can't one day have the math for describing consciousness too.


> That seems to be a value judgement about whether consciousness is a categorically different type of "ontological emergence" compared to others.

It's not a value judgment to point out that objective third person facts can logically entail other objective third person facts, but cannot logically entail subjective first person facts as with qualia. That's the whole point of the hard problem. All forms of ontological emergence we've actually seen are the first sort of entailment, but to accept and explain qualia with ontological emergence, you need to accept the second kind of emergence, which is a whole new thing we've never seen and we're supposed to accept it only for consciousness because we have such a hard time accepting that our perceptions on this could be flawed. It's simply more plausible that we're mistaken about subjective first person facts IMO.


well the brain is special because it gives rise to consciousness. lungs, kidneys and heart don't


The brain evolved to be intelligent. Intelligence is the ability to model the outside world in order to make predictions and then choose an optimal course of action. More intelligent beings are generally more "fit" in the evolutionary sense. Hence how humans have conquered all the ecosystems on the planet.

Turns out, consciousness/experience/qualia is an unexpected side effect of this process. We can "feel" the intelligence happening.

Where we humans get lost is believing that it has any meaning, or that we are somehow in control. We're not; the feeling of free will is just another trick of qualia.


We don't know that either. We may be a collection of interacting consciousnesses arising from different organs that all pass around the baton called "I".


we know that people don't become someone else after a heart transplant for example


There's a lot of nerves and bacteria in the digestive system that might contribute some of it. That and various hormonal glands and reproductive systems.


> Evolution basically says humans are conscious matter.

Evolution nowhere states that humans are conscious.


What is a non-theist argument that dictates we should treat synthetic “consciousness” as fundamentally different than biologically derived consciousness?

I think this is a valid question even if the strongest evidence for machines having “real” consciousness is its external/utility indistinguishability from biological consciousness.

It would seem to me that a machine that seems to be conscious and professes that it is should then be treated as if it were, if for no other reason than the likely outcome of not doing so - creating “synthetic distrust and enmity” between “conscious” machines and humanity.

It seems like if we are going to ignore the “utility” argument for consciousness, we then must prove that other humans as well are actually conscious, and not just appearing to be so.

Seems like a bad place for the species to go, for a multitude of reasons.


For all we know, it might be that consciousness is not fully contained in the physical structure of the brain. It might as well be something that partially exists on another layer of reality we have no idea about yet.


> For all we know, it might be that consciousness is not fully contained in the physical structure of the brain. It might as well be something that partially exists on another layer of reality we have no idea about yet.

Yes, but not another layer. Just the environment around us - physical and social. Every sensation comes from the environment, our perceptions are trained on this data stream, every value is dependent on environment, our emotions reflect it, language and society are part of the environment, we base our thoughts on language and our actions on other people. Our brain is made from environment signals, just like GPT-4 is made from its language corpus.

The unsung hero of consciousness is actually the environment with its rich data stream and feedback. Consciousness, language, genes, the internet, LLMs and the evolution of intelligence are all social processes. They don't make sense individually, only as part of an evolutionary population. They can only evolve if there are many agents.

Now, I know this doesn't sound as sexy as quantum consciousness, but it is a more parsimonious and better grounded position. It accounts for the data engine that actually creates consciousness. Don't be looking for consciousness inside the brain or in exotic physics when the magical ingredient is outside.


> Yes, but not another layer. Just the environment around us - physical and social. Every sensation comes from the environment, our perceptions are trained on this data stream, every value is dependent on environment, our emotions reflect it, language and society are part of the environment, we base our thoughts on language and our actions on other people.

This is also my understanding of consciousness. We are so focused on the human brain's capabilities as a generic information-processing mechanism (because of its remarkable adaptability) that we ignore how fundamentally dependent it is on its environment, leading us to seek a metaphysical explanation for its functioning when it's actually all around us.

Language, culture, and abstract thought capabilities that we use to describe consciousness are symbolic overlays upon the physical environment, but ultimately emerge from it, and their objectives ultimately tie back to it.

Human consciousness - especially its group/cultural aspects - has been a tremendous advantage to the species, which is why we are even in a position to be fascinated by it today.


True. Which leads one to wonder: could those same kinds of consciousness that inhabit us also be capable of inhabiting an AI? Or could there even be other kinds of consciousness, in those or other dimensions, that are not currently able to exist on Earth because nothing yet exists here that they can inhabit, but that our computers, once we arrive at some specific combination of hardware and software, would enable to inhabit them? That would bring to Earth a new kind of consciousness from outside the universe, unlike all the others already present here (in us, the animals, the plants, etc).


Is it still an "artificial" intelligence if it's made with real consciousness, though?


Intelligence != consciousness


Agreed on this to a certain extent. Right here on earth in the present day and without taking into account innovative technological advances in AI, we have examples of highly sophisticated intelligence (at least in a functional way) belonging to things that have little or no known consciousness.

First example: large hive insect nests, such as those of termites and certain ants. Their internal construction is extremely complex and often built in useful (to them) ways that would challenge even the abilities of smart human engineers trying to do the same with similar tools (sharp digging instruments, organic cementing liquids, raw scavenged building materials and nothing else). Yet any individual termite in a nest of millions shows only, maybe, the most minuscule and debatable signs of consciousness. They instead act like biological, physical pieces of an algorithmic process.

Second, obvious example: evolution itself. Here we have a process that produces organic, biological systems of such complexity and self-direction that we, with all our cognition, are barely capable of grasping them robustly, let alone emulating any major part of them, yet it's entirely mindless. Sure, it has billions of years to do its thing through the imperatives of brute survival mechanisms, but it's still incredible how, on a macro scale, none of it involves the least bit of known cognition.


Yeah


Human consciousness has logically to be a product of the physical structure of human beings.


To be fair, there is absolutely nothing within known physics that would explain why we're more than a complex biological computer and how subjective experience and qualia arise from it.

So we genuinely cannot even begin to guess as to what actually imbues consciousness into our neural processes. It could be anything from "it's actually just the physical processes" through "there's a soul piloting our brains by influencing quantum noise" to "the brain is basically an antenna for our metaphysical self and death/disability is the loss of connection"


There's currently no reason to assume that we're more than a complex biological computer - while it's indeed interesting to explain how subjective experience and qualia arise, it's certainly plausible that this can arise as emergent behavior once a specific type of computer is doing a specific type of computation, and we 'just' need to study that complex computation.

Unless we obtain any evidence whatsoever that this can't be the case, Occam's razor would suggest that this is the hypothesis to explore, without looking for new physics or other unlikely assumptions.


> why we're more than a complex biological computer

Are we more than biological "computers"?

I think we are complex biological systems that we do not fully understand yet but that does not mean anything supernatural or beyond our understanding of physics is at play.


Except the "physical structure" might extend into dimensions we don't know about.


Then this is either covered by general physics theories about extra dimensions that apply to the universe as a whole (probably applied to quantum physics in this case), or it is random quackery.


How about energy that's associated with the physical structure?


That's the kind of musing that leads you off into the weeds. Sure, anything is possible, but science is useful for observing, cataloging, and utilizing knowledge that is provable, or at least "approximately" provable to the point it's useful. Spiritual talk of what the mind "is" really doesn't have much of a place when talking about AGI, I don't think. We need repeatable outputs for given inputs into a system. I can make up "what might be" all day long, but it's not really useful unless I can prove it or use it.


It might be a pink elephant that lives in the 7th dimension and contains every person's consciousness in the shape of a magic peanut. Or it may not.


Legitimately made me lol. Thank you for being the rational one. When topics of the mind come up even people who are normally smart and rational can turn into quacks.


Topics of the mind are self-referential and so make most people self-select into three groups, in order of decreasing religious fanaticism:

1. Monistic physicalists. Here because of the school. We studied physics so everything looks like a nail: reality is made out of atoms, consciousness is an illusion, etc. The most vocal and fanatic crowd in tech.

2. Cartesian dualists. Soul vs. body, etc. Here because of the church. Surprisingly, there can be some degrees of rigour here. In tech circles, members are on average a bit less reluctant to switch to another group.

3. Monistic idealists. Here via critical reasoning, so could switch to another group if there was a good reason. See not only topics of the mind as self-referential, but every other topic as implicitly mind-referential.


There is nothing religiously fanatical about seeing that everything in science appears to be explained by our laws of physics, and concluding that human consciousness probably is, too. In our history we've seen time and time again that humans will believe in magic and higher powers and every single time it's turned out to be just physics, just biology, just chemistry...

And here we have something mysterious and poorly understood (and poorly defined...): consciousness. And again as always the quacks line up with their magic to explain it all. Higher dimensions! Quantum effects! We're all in the matrix and only our minds are real! Even though we can clearly see that so many mental processes can be influenced with drugs, with hormones, with magnetic fields, or with a scalpel. Everything points to yet another physical thing that we'll hopefully understand and explain soon.


> In our history we've seen time and time again that humans will believe in magic and higher powers

Indeed; and these days in tech circles it is the magic of atoms (or strings, or whatever entities you like). The fact that physics by design makes no claims about objective existence of anything does not stop them from thinking that somehow all those entities (which are merely metaphors, ways to create models that are legible to our minds) must exist in some fundamentally objective sense; neither does the fact that we cannot directly access them other than via consciousness, or that the entire idea of empirical observation hinges on the existence of the observer.

> Everything points to yet another physical thing that we'll hopefully understand and explain soon.

It had been explained [away] already—since you appear to hold physicalism, you may also like illusionist theories of consciousness.


Soul fracture theory?


I saw that pink elephant once after a lewd night of heavy drinking and potential hallucinogenics. His name is Frank, and he says hello!

/s


It's no coincidence that Frank has his own beer brand: https://en.m.wikipedia.org/wiki/Huyghe_Brewery, so that we may commune with him.


Once I asked an AI and this was the answer:

> In a parallel universe where pineapples are the dominant species, intergalactic pizza deliveries are made by flying spaghetti monsters riding unicycles made of marshmallows. Meanwhile, sentient clouds debate the meaning of life with philosophical penguins on top of rainbow-colored mountains made of bubblegum.


Gödel’s incompleteness theorem says that pink elephants may indeed well exist.


That's really not what it says. Gödel's incompleteness theorem applied to AI would say something like 'There are statements about the model's behavior that we cannot prove without relying on statements that cannot be proven' (this is because the AI algorithms are obviously based on elementary arithmetic).
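For reference, a standard informal statement of the first incompleteness theorem (the notation below is the usual textbook convention, with Q denoting Robinson arithmetic):

```latex
% First incompleteness theorem (informal statement): for any consistent,
% effectively axiomatized formal system F that can express elementary
% arithmetic, there is a sentence G_F that F can neither prove nor refute.
\big( \mathrm{Consistent}(F) \,\wedge\, \mathrm{EffectivelyAxiomatized}(F)
      \,\wedge\, F \supseteq Q \big)
\;\Rightarrow\;
\exists\, G_F \; \big( F \nvdash G_F \;\wedge\; F \nvdash \neg G_F \big)
```

Note this is a claim about provability within a formal system, not about what does or doesn't exist in the physical universe.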


The theorem applies beyond elementary arithmetic to any formal system. If we try to be rigorous in that way with our models of the universe, the theorem would say that no matter how formally sound our models are they will have to be incomplete, incorrect, or non-provable; that is, we would not be able to rule out the existence of consciousness or figurative pink elephants.

(And if they are not formally sound in that way, our models can be safely considered to be incorrect/incomplete simply because they are not formally sound in that way.)


no it doesn't.


This is only true if you reject the pink elephant argument as straw man in the first place.

As long as you consider it a figure of speech (a charitable interpretation indeed), then existence of pink elephants (or ghosts, or what have you) is exactly the implication of the theory in context of scientific method.


Gödel's theorem deals with the axioms of mathematics. The scientific method deals with the physical universe. A small part of mathematics is useful in describing the universe, but most of it isn't.


The theorem is about formal systems in general. You may want to take a look at https://en.wikipedia.org/wiki/Scientific_method#Science_of_c..., though I recommend reading, for example, GEB to understand the implications of the theorem.


I thought we did know this now. I can't quote exactly what science I read, but the latest science I saw showed consciousness arises in the nerves outside the brain.


I've heard reliable reports of Jeffrey A. Martin and his fellow research participants moving visual perception out of the eyes and the rest of the body into the surrounding environment. Literally seeing behind him. I don't think his eyes stopped working, he just moved his subjective awareness outside of his body.

I found it hard to believe until I used his techniques to contain 'all of reality' into a small space (the visual field) and then 'move' it to my chest. Feeling like music, sight, and body sensations are all happening within a tiny space-contained field is a very startling experience.


There's always 'reports' of people being able to do these magical things that cannot be explained through modern science. And yet challenges such as these https://en.m.wikipedia.org/wiki/List_of_prizes_for_evidence_... never seem to have winners. It seems that these abilities inherently come with the luck of never having financial problems that could be resolved with a generous cash prize.


Jeffrey has absolutely no need of a cash prize. Dude's rich as balls.

And yeah, there absolutely is something to that. Independent wealth totally frees you up to go to the crazier places.


You've heard "reliable records"? Can you link it/them? I can't find via search engines, I tried.


Reports, not records. People who I know who know him sharing privately. Jeffrey doesn't and can't publish everything that he comes across and has to be very circumspect with the stuff he does.


ROFL


If we are talking about intelligence or "agency", then I don't see why something other than carbon can do it.

But if we are talking about "there's there there", then I am not sure if we will ever find an explanation for this. Let's say that X is what gives rise to "there's there there". What does this tell me? Nothing.

This is perhaps the most interesting, most important, and most baffling mystery of the universe. It requires no equipment other than one's own mind. Esoteric contemplatives have spent millennia on this and none of them have the answer to the ultimate question.


I am hopeful that advances in brain-computer interfaces will start to provide a partial answer to the question of "what's there" and why it's there. It seems to me the ability to controllably augment one's own consciousness with precision will tremendously clarify the necessary ingredients for consciousness.


Neuroscience has already done lots of investigations as to what's there and why. We know which structures in the brain do what (including "consciousness") at an increasingly fine level. We can observe all sorts of brain disorders and dysfunctions and their effect on consciousness. You can do drugs yourself to alter the ingredients of consciousness.

I think people just don't like how boring the answer is.


No, I don't think you understand how fundamentally hard the question is. See the hard problem of consciousness[1]. When you think about gravity, you can imagine a universe where gravity is reversed. All of the physics seems mechanical, or probabilistic, or whatever. But "there's there there" is a completely different phenomenon that I think we will never have an answer for.

There doesn't seem to be a continuity, either something is there, or there isn't. You can be drunk, hallucinating, feel extremely dizzy, trapped in a vat, trapped inside another universe inside vat, trapped as a figment of reality of other beings, but the fact that "there's there there" is binary. It is something that cannot be divided or peeked into. A kind of fundamental atomic property.

1. https://iep.utm.edu/hard-problem-of-conciousness/


I understand the hard problem. There clearly is a continuum of "consciousness" from simple insects on up to primates & cetaceans, and the complexity of that consciousness is correlated with the structural complexity of the brain. Creating minds is just what brains do.

Anyway, I was specifically responding to the parent comment statements about the research needed, pointing out that we already have it.


I don't think neural correlates of consciousness have been identified, so clearly there remains research to be done. I'm not in touch with the neuroscience literature, but acknowledging the hard problem of consciousness means one should accept there is a lot of work remaining. That being said, I believe the hard problem is surmountable. In my mind the situation is similar to computer science before Turing's description of the Turing machine: there were imprecise notions abound about what computation meant that needed to be clarified through a concrete model. My view is simply that finer control over conscious experience would aid understanding enormously. But you're right, I should probably skim the real research more.


I think the main ingredient is a living being.


Why?


Because the evidence strongly indicates so: we haven't found a single instance in the entire universe of a conscious entity that wasn't also a living being.


And 200 years ago only living beings could fly, but now we have machines that can fly higher and faster than any bird, even to Mars. But back then you could have said flight requires a living being because that's all we'd ever seen.

A hundred years ago computers didn't exist and now they've been beating us at chess for some time. That you haven't seen a conscious computer yet in no way proves that it's not possible in principle.


I never said that the evidence proves that conscious machines are impossible. I said it strongly suggests that it's impossible. Also I disagree that airplanes can fly, for the same reason that I don't agree that boats can swim. The term "can" implies agency, and machines don't have that. All the technological progress during the last 200 years doesn't appear to have brought machines any closer to having such capability.


We had really better hope that biological brains are fundamentally "bad" at consciousness, because if a human brain is as efficient at generating consciousness as other biological brains are at doing much older, more refined tasks, we're totally fucked. For example, a dragonfly brain takes visual input from thousands of ommatidia and uses it to track prey in 3D space and plot intercept vectors using only sixteen neurons. How many transistors does a computer need to do the same thing? Now scale that to however many neurons and synapses you guesstimate a human brain needs to create consciousness. The numbers are bad news.

There's little doubt that eventually we will be able to design a computer version of a human soul. Whether that sapience fits in a computer smaller than a zip code or thinks any faster than a person is an entirely different question.


I really do hope ASI gets us to gradual replacement-based uploading. Having something as glorious as sapience trapped in a delicate and temporary bag of flesh kinda sucks.


Nothing particularly glorious about sapience trapped in a permanent bag of flesh, especially one with a rather fixed view of the world around it. Never mind dictators; how about:

'The President of Global Enterprises Inc. is today 150 years old and currently holds the record for the longest serving president in the Company's history'.

A cause for celebration?


I don't really care if stubborn, old leaders live longer if it means I don't have to worry about my loved ones dying and get to see humanity reach the stars.


Be careful for what you wish for. There’s no guarantee immortality doesn’t result in eternal suffering.


Humans generally don’t like eternal suffering, and will fight against it. That’s a reason to think it won’t happen.


I don't like being crushed by rocks but I can fight against it all I want, a rock that's heavier than me will crush me.


Wait, where did I say that immortality would be mandated!? Of course you'd get to choose when you want to shuffle off the mortal coil.

For me, personally, I don't see that being the case for many hundreds or even many thousands of years at least. I'm nearly 30 and I feel like I've barely lived a blink of an eye.


I feel like that would require society to get pretty chill about suicide. In a world where everyone is immortal-by-default, I can actually see the opposite happening - the fear of death increasing from its absence, and becoming an even greater taboo.


I think such a society would be chill about suicide. An exploration of this idea is found throughout the "Culture" novels by Banks. It's okay to decide when you want to clock out. It's certainly better than being forcibly clocked out against your wishes.

I agree it would be weird at first, but after the first thousand years or so, it would probably become common enough that people wouldn't mind, and the taboo would disappear. Weird, maybe, but at least people will get to live their lives to the fullest and at their own pace.


I'm still conflicted about what we do with the bag of flesh after the fact. Are we bag holding forever?


what do you mean? the bag is useful for going around, but it's frail and needs replacing every few decades.


a $1000 drone is much more useful for going around


Something tells me the bag might change its mind about the whole operation once the upload is finished and it realises what's up next.

So yeah not sure I like the idea.


So far the only idea that I've read about and might feasibly result in uploading instead of copying is gradual replacement of each cell by a nanobot simulating that cell. So at the end of it, there'd be no bag to change its mind.


You can simplify it a bit. Yes, gradual replacement is likely the way to go, but you probably don't need to replace individual neurons one at a time. Individual neurons don't really matter or meaningfully contribute to our consciousness.

You can likely replace the large "functional groups" of neurons instead, with the group size threshold being the maximum that doesn't meaningfully affect our consciousness. This might well be many billions of neurons at a time.



Hence the "gradual replacement" part of my comment :)

Scan and copy never made sense to begin with, as you point out. Not sure why it's the default when people think of mind uploading.


People seem to imagine it like plugging in a cable or getting an EEG cap and then they can't imagine the next step. Gradual replacement is a very radical idea to most people - a friend of mine recently described it as a horror movie.


In this book, I think: https://www.ebay.co.uk/itm/393899945799 - 108 Tips for Time Travellers by inventor and Professor Peter Cochrane, 1999, one of the essays is him asking his wife if she would still love him if he had false teeth, a false leg, etc. bit by bit until she stops the conversation saying "I'm not having you dying by installments!".

When you replace a heart with a pump, you don't get a human heart. When you replace a kidney with a dialysis machine, you don't get a human kidney. Why expect that when you replace neurons with simulations you get a human brain or a human consciousness, or when you've replaced everything, a human?

Biological replacement, your body growing more new neurons, maybe, but it won't be mind uploading. And it won't get you you-at-age-twenty back.


To be honest, I don't really care if I get a human brain or not as long as I stay conscious.

I – and I'd wager most people – would be fine with some level of alteration to their being if it meant they wouldn't literally die, and that they would get to spend some more time with their loved ones. Change is a part of life, and this would just be another one.


It's arguable that you would literally die and be replaced by a machine-puppet pretending to be you; see the short story about that I linked in another comment in this thread exploring this topic: https://news.ycombinator.com/item?id=40407969

There is no technology I can see which would convince me that a mechanical replacement hand was my hand and my hand was not literally dead. Only Prof. Michael Levin's work of regrowing real limbs might do that - see https://www.popsci.com/body-electrician-whos-rewiring-bodies... and he has given many video talks, I saw one of him presenting at Facebook about his work triggering regrowing of frog's legs, but there appear to be many more videos of him - TED and Lex Fridman interviews.


That's pretty cool! At the same time, I think I would take the risk if I was on my way out anyways.


Let's assume that the replacements are perfect replicas from the outside, only the inside is different. Why would it not work?

The examples you listed are current technologies that don't remotely approach the primary nor secondary functions of the originals.


> "Why would it not work?"

As a thought experiment I'll go with "atom by atom replacement would still be you". But I won't easily go with "therefore we modern humans could actually do it in reality, and replace the parts with fakes, and then with simulated fakes, and then extract the simulation to run elsewhere for the uploading part, and even God couldn't tell the difference" just by asserting all those extra bits to be true. When have we ever made perfect replicas with the inside the same, of anything non-trivial at any scale?

Physical impossibility (it would work but we can't do it) argument, see here for long discussion of how hard the engineering problems of just Neuralink are: https://waitbutwhy.com/2017/04/neuralink.html

Physical "doesn't make sense" argument; we model brains as 80 billion neurons but bodies are not neatly separate organs and neatly separate pieces. The brain isn't separate from the body - nerves are basically brain matter spreading down to your spine, guts, arms, legs. Are 80 billion fake neurons surrounded by layers of scar-tissue reconnections going to be the same? What sense does it make to suggest a simulated sperm could mate with a real egg?


The physical engineering problems may be overcome with time. Billions of neurons - well, arguably you could replace more than one at a time, up to the largest "functional group" or threshold that doesn't compromise you.


“The equation must be balanced!”


Before we can answer that question, don't we need a method for determining whether an entity is conscious?

As far as I know, such a method doesn't exist yet. We have no way of verifying if other entities are conscious.

I'd settle for a more mundane goal than "consciousness:" reliable machine intelligence.


What is this obsession with making computers human-like?

We can put those machines to better uses than forgetting, having biases, and making subtle or gross mistakes (hello, ChatGPT) like we do.

The AI that some apparently consider top-notch today is only good at powering NPCs in video games. The more serious the use-case, the more harmful it becomes.

Moreover, achieving artificial consciousness is only good at fueling endless debates about whether it fakes it perfectly or it is the real thing.


I think there's practical value to knowing what would make a very complex AI system conscious in order to explicitly and intentionally avoid it - we want to create powerful artificial systems to do all the tasks that humans don't want to do, but there's no reason for those systems to have the capability to experience suffering by being 'enslaved' to do all these tasks; arguably consciousness isn't necessary nor useful for any of these tasks[1], and unless it turns out to be an unalienable side-effect of some critical mass of capability, we'd rather not have these systems be conscious.

[1] I can imagine a few roles with relationship-building where expressing consciousness could be useful, but IMHO for all of them faking consciousness would be far preferable than actually having it.


Interesting take. This reminds me of Blade Runner, which, I read, takes place in 2019.


We're kind of failing by making AI that automates the things humans want to do.


> What is this obsession with making computers human-like?

Replacement of human labor on the cheap, the most capital intensive part of many businesses. Alternatively as a productivity multiplier of human labor.


"Cheap"

OpenAI loves to talk revenue but I'd like to hear more about their unit economics.


Writers of all types. Not a complete replacement but if you can get away with half as many + AI, that’s substantial savings.


I'm skeptical that AI can actually replace writers like AI people seem to think it can. In marketing, I don't think the actual writing is really that much of the cost compared to paying Google and Meta to serve the ad, unless you're doing something crazy like fully personalized ads, which I'm not convinced are actually better considering how creepy they are. For writing a novel, I've not heard of any books that are mostly written by AI that people actually want to read, and most authors I've seen speak about it seem to hate the technology. And again, the unit economics and market make me think it makes more sense to invest in making a better novel than trying to pump out AI garbage in a winner-take-all situation. I don't really get it for technical documentation either, where the thing you're really trying to get across is human understanding. The point is not to just pump out garbage, it's to actually understand something and then translate that understanding into words. I'm not convinced LLMs can do that right now.

The one area I've seen it work great for in meeting transcripts and summaries, but that's not a convincing use case if you're losing money every time a user uses the service.


I'm kind of worried about virtual assistants, teachers, etc ruining a generation of children who think they are "real people" as much as anything else I've heard about AI. Adults can (usually) discern reality much more easily than children. Some of the assistants are getting these very "emotional" sounding voices and it's freakish to me. I don't need Siri to sound like a doting girlfriend, I just need it to play a song, fetch some information, pull up my schedule, etc.


I think humanity is afraid of being alone.


Damn straight. If space travel is so bloody hard, I'll just make the damn aliens myself.

Related, but LLMs are the ultimate example of humans following the tradition of the Timaeus and imitating the demiurgos - the divine crafter, as the gods did when they crafted the demigods, as the demigods did when they crafted us, and as we do when we craft machine intelligence. Neo-platonism engaged with this idea a long time ago.


Agreed. The calculator is not human-like, let's figure out how to humanize this! Intelligence comes in a myriad of ways. Mimicking humans will just produce a human, what's the point? I am not even sure a super intelligent "human" is even a good thing. The smartass will probably spend their intelligence betting on the stock market.


The point is actual cognition. Without consciousness there can be no understanding which stunts the ability of AI and makes AGI impossible.


That doesn't prove anything, but from an etymology perspective both "conscious" and "cognition" mean "know together". One can easily spot the common "co-" prefix which carries the "together" part, but the main roots derive from different verbs which have the alternative meanings of "distinguish, dissect, split" and "recognize", respectively.

As is often the case, the definition of "recognize" is circular, so it seems that it is the "real root" for "knowledge", while "conscious" seems to have an "analysis" flavor.

IMO, "understanding" is synonymous with intelligence, and the ability to analyze is one key part of it (another is the ability to recombine). So it seems that we can agree that "without consciousness there can be no understanding", although we moved away a bit from the common meaning of "conscious".


We need smarter automation for Star Trek communism utopia to be feasible.

Having biases or making subtle mistakes is not really desirable, it's just that we haven't figured out how to have something that can reason without those things.


We need a stronger focus on ethics and being decent to each other.


Yes, the beauty of nature is, there is no magic. Everything is governed by laws; when we uncover them, we can replicate it.


The more I've studied, the more I've come to the opposite conclusion.

Nature is >99% magic, and though some tiny slices of human-friendly interfaces of reality are replicable, much more is chaos, weird emergence, fields, probabilities, and stuff so bizarre to our mammalian logic that we might as well call it magic, god, the simulation, or just bleeding-edge physics as the whole field is getting weirder and weirder.

The whole notion of nature's beauty stemming from some replicable, controllable, and "no magic" scenario is a very "homo sapiens" desire for order and control.

We know close to nothing, and therein lies the beauty in my eyes.


At the lowest levels, with quantum weirdness (to our way of thinking), yes, we can only create metaphors and don't really understand it. Same at the extreme other end, where relativistic effects can't be ignored.

But we don't live at those levels. That low-level unpredictability usually is statistically predictable at the macro level where we live. "Caloric" doesn't exist, but it is a perfectly usable concept. There is no need to actually measure the position and velocity of every molecule of gas in a balloon to understand its temperature.

So, to us, the world is 99% magic at the extremes, but <1% where we actually live; we can understand this regime fairly well.
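That statistical stability is easy to demonstrate. The toy sketch below (my own illustration, not anyone's real thermodynamics code) samples molecule velocities from a Gaussian as a crude stand-in for the Maxwell-Boltzmann distribution; any single molecule's kinetic energy is wildly random, but the average over many molecules - the "temperature" - barely moves between runs:

```python
import random

random.seed(0)  # fixed seed so the demonstration is repeatable

def mean_kinetic_energy(n_molecules):
    # Sample each velocity from a unit Gaussian (a stand-in for the
    # Maxwell-Boltzmann distribution; mass and units are arbitrary)
    # and average the per-molecule kinetic energy v^2.
    return sum(random.gauss(0, 1) ** 2 for _ in range(n_molecules)) / n_molecules

# Five independent "measurements" of the same gas.
samples = [mean_kinetic_energy(100_000) for _ in range(5)]
spread = max(samples) - min(samples)

print(samples)  # each average lands close to 1.0
print(spread)   # the run-to-run spread is tiny
```

The individual `v**2` terms range from near 0 to 10 or more, yet the averages agree to a couple of decimal places: micro-level randomness, macro-level law.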


I have moved to this camp as well, and I don't mean in the "we don't understand it so we call it magic" sense, I mean it seems more and more like actual magic.

people are talking about timing attacks on state updates in the universe, hopefully we can exploit it


> people are talking about timing attacks on state updates in the universe, hopefully we can exploit it

If the universe did happen to be a simulation (as opposed to just naturally holographic), I imagine exploiting it might be the only way to conclusively prove so. As an actual simulation, there would be a risk of someone and/or something observing it. If intelligence in our universe tends to eventually discover exploits and if the observers aren't fond of simulation errors, we might have ourselves an unexpected answer to the Fermi paradox.


>Everything is governed by laws, when we uncover it, we can replicate it

Sure, but it's precisely because everything is governed by laws that you can't make it how you want. It's perfectly possible that consciousness is a specific property of organic brains rather than digital computers. I can understand the laws that govern the properties of the Golden Gate Bridge, doesn't mean I can build it out of jello.

That was precisely the misunderstood point of Searle's Chinese room, by the way: that a digital algorithmic computer can emulate the work that a human mind can do, even to the point of being indistinguishable from it, but need not understand any of it (i.e. being conscious of it) while doing so. Or put differently, that syntax manipulation and semantics are completely orthogonal.

That's in fact very relevant in LLMs. An LLM can talk about how strawberries taste as if it was conscious, but by definition it can't genuinely have experienced it.


But organic brains aren't some magic unknowable mush, we know how they work on a low level, we can trace how visual processing is done, etc, etc. As far as we see, the raw computing capabilities of biological neurons that we understand are sufficient to explain the behavior of animals, and as far as we see, biological brains don't do anything that can't be replicated with sufficiently powerful digital computers. So while it's technically possible that consciousness is a specific property of organic brains, we have no evidence at all that it would be the case and some (although not conclusive) evidence that there's nothing special, so unless we identify some difference, those hypotheses are not comparable and we should assume that according to our best current knowledge there aren't any specific properties of organic brains.


You can simulate the Golden Gate Bridge on a sufficiently powerful computer, though, such that the simulation will behave exactly like the real thing.

Chinese room is a good example of begging the question, since the postulate that "understanding" is somehow distinct from observable behavior of the system implies the outcome. But from a materialist perspective, Chinese room, considered as a whole, does understand what it does. The fact that the man inside of it does not is simply irrelevant to the question.


>Chinese room, considered as a whole, does understand what it does. The fact that the man inside of it does not is simply irrelevant

Searle does address this point even in the original paper. That argument doesn't hold water because you can imagine taking the whole room and putting it in your head and then you still don't understand Chinese. Or put differently if you're a Mandarin speaker and we two sit in a room and I use you to secretly translate, you understand the meaning of what is being said, I don't and it doesn't mean anything to say we "as a system do".

The point is that even though we can "as a system" behave as if we speak Mandarin, there's a difference between you and me. You understand what you're talking about, and I just hear gibberish. Searle is a die-hard materialist, by the way; nothing of that violates materialism. What he isn't is a functionalist. What he is teasing out in the thought experiment is that a system that produces the same output as another system does not need to be equivalent on the inside.


That is still nonsense. If you take the whole room as a system and integrate it into your brain as a subsystem, then yes, you will understand Chinese.

The reason why we have to speak of the room and the person inside as a system is because the real magic is in the instructions. The fact that they are performed by a self-aware human is completely irrelevant to the setup and is only there to confuse the matter.

In your other example with two people, viewing them as a system doesn't make much sense because one of those people is redundant - you can leave just the person who speaks Mandarin, and that is sufficient for the whole to function. So they alone are "the system". And it also operates based on instructions, except that those instructions are stored in the person's head and executed by low-level processes in the brain.

Searle believes that consciousness cannot be simulated as a digital computation, period. Given that any other physical process can be, this requires a belief that consciousness is somehow magically different from any other physical process in some unspecified way (that appears to be conjured out of thin air solely to make this one argument, at that). That is not materialism.


>The fact that they are performed by a self-aware human is completely irrelevant to the setup

Granted, but we can easily create a more clear example that addresses both of your objections, right? Say in the room is a colorblind person with a machine that detects properties of color.

If someone now asks you questions about colored objects you can answer them, but I assume you grant that neither the colorblind person, nor the machine, nor the two as a system have conscious experiences of color vision as you do. The conscious experience has nothing to do with function. Every physical property you can describe without necessarily experiencing any of it.

And I don't think your assertion about Searle's belief is correct. (or at least I don't believe that). If you fully simulated a physical brain, down to the atom, I think the experience in the simulation is probably equivalent to the experience outside. But if you merely model outward functions of conscious agents, behavior that is, there's no reason at all to assume all those systems must be conscious or have experiences.


> If someone now asks you questions about colored objects you can answer them, but I assume you grant that neither the colorblind person, nor the machine, nor the two as a system have conscious experiences of color vision as you have.

It really depends on the setup. If the system is primed with knowledge of what color various things are (so e.g. it can say that grass is green because it is in the knowledge base), then, no, it does not experience color vision. It's just regurgitating facts.

On the other hand, if you actually have some kind of sensor that is capable of perceiving color, and you provide the output of that sensor to the colorblind person inside the room, who interprets the signals (say, represented as numbers) according to the rules, and those rules result in the system as a whole being able to say things like "apple is red" when presented with a red apple, then yes, I would in fact argue that the system does consciously experience color vision.

> And I don't think your assertion about Searle's belief is correct.

Searle claimed that computers "merely" use syntactic rules to manipulate symbol strings, but have no "understanding" of semantics, and that Chinese room demonstrates that this is not sufficient for consciousness. This was not just about correctly modelling outward functions, though - quite obviously, the room has a lot going on inside, and of course you can model neural nets without physically simulating neurons, either. Quite frankly Searle's attempt to make some kind of qualitative distinction between biology and computation is nonsensical, because it's the same physics all the way down, and it is all representable as computation.


The opposite. Laws are just conceptual representations of underlying intractable processes.


The "maths is discovered not invented " camp


When you go even further, math is again invented to represent The Underlying. If it were discovered, math would be it.


This assumes we have the capability of uncovering those laws.


Plot twist, there are no laws except those that we collectively imagine.


In the sense that science is ultimately a religious enterprise expressing our belief in a constant, unseen, unchanging reality (which is not a universal belief by any means, and is utterly religious in nature), then yes, this is true. On the other hand, if we take an irreligious look at things and try to keep focus on just what we observe, one is forced to make the opposite conclusion, as the very basic building blocks of reality are not uncoverable, non-replicable, and seemingly not governed by anything but randomness.


Science is based on the hypothesis that stuff that worked yesterday will work the same way tomorrow, and it tries to discover the rules that govern the stuff. There is nothing religious about it. It's a safe assumption to make; if there are no rules or the rules can change in arbitrary ways, science should be able to detect that, too. We've seen enormous scientific progress throughout human history, and based on this history I think it's safe to say that it's the best way to learn about our world that we've discovered.


> Science is based on the hypothesis that stuff that worked yesterday will work the same way tomorrow

If you believe this assumption, you're believing a statement on faith and faith alone. I agree with you. I absolutely believe stuff will work the same tomorrow as yesterday at some level of abstraction.

> science should be able to detect that, too

Hmm... it might. It might not. For example, if 'god' (for lack of a better term) decided to make all spin states that he was going to make spin up instead be spin down from here on out, we would actually have no idea, despite those two 'states' of the universe certainly being different.


What do you mean by the basic building blocks of reality? The very machine you are posting your comment from can only be manufactured because the laws of physics don't change, and these machines and their manufacturing process operate at the atomic level. Similarly, do you have an example of a well-defined experiment that would not produce the same result consistently? You could easily win a Nobel Prize by publishing such an experiment. Lastly, if someone did produce an experiment that did not give consistent results - that is, an experiment performed twice with all variables staying the same, but with a different outcome - then the theory that all well-defined experiments are reproducible would be wrong. It isn't axiomatic.

>try to keep focus on just what we observe

That's all science is, though – making observations. Writing hypotheses, running experiments, etc. are just means of creating things to observe. I'm curious: what did you observe that you felt was not bounded by some static law of nature?


> What do you mean by the basic building blocks of reality?

I was thinking photons which, when passed through a diffraction grating one at a time will cause an interference pattern on average but whose individual course is -- as far as we know -- unpredictable.

This flies in the face of the idea that the universe is perfectly predictable.
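For anyone who wants to see this concretely, here's a toy simulation (my own illustration: classical rejection sampling from a cos^2 fringe pattern stands in for quantum detection, and all the parameters are made up). Each simulated detection is individually random, yet the histogram reproduces the fringes:

```python
import math
import random

random.seed(0)

def detect_photon():
    """Draw one detection position on [-1, 1] from a cos^2 fringe
    pattern via rejection sampling. Each call is individually
    unpredictable; only the aggregate is lawful."""
    while True:
        x = random.uniform(-1.0, 1.0)
        # Intensity of a simple two-slit pattern (peak value 1).
        intensity = math.cos(4 * math.pi * x) ** 2
        if random.random() < intensity:
            return x

# One photon at a time: no way to predict where the next lands...
positions = [detect_photon() for _ in range(20000)]

# ...yet the histogram shows the interference fringes.
bins = [0] * 20
for x in positions:
    bins[min(int((x + 1) / 2 * 20), 19)] += 1

# Counts pile up near the bright fringes (e.g. x = 0) and thin out
# near the dark ones (e.g. x = 0.125).
print(bins)
```

The per-event randomness and the lawful aggregate pattern coexist, which is exactly the tension being discussed.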

> The very machine you are posting your comment from can only be manufactured because the laws of physics don't change, and these machines and their manufacturing process operate on the atomic level

Indeed... an axiomatic religious belief that has incredible amounts of evidence and has proven very useful, but which, for the reason I mentioned above, could turn out not to be the case. We shouldn't confuse the laws of statistics with certainty, even if the law of large numbers usually works.

> Similarly, do you have an example of a well defined experiment that would not produce the same result consistently?

I mean, there are hundreds of them at this point: passing single electrons or photons or buckyballs through diffraction gratings (humans too, we think!); the Stern–Gerlach experiment; determining the individual spin states of entangled pairs of particles. I can go on.

> Lastly, if someone did produce an experiment that did not produce consistent results, that is, an experiment performed twice with all variables staying the same, but the result of the experiment being different, then the theory that all well defined experiments are reproducible would be wrong

Well, like I said, we have lots of experiments whose results differ and which we cannot predict (some of which the math says we can never know), and yet at the macro level we do science anyway, because of our religious belief that it usually is okay.

For someone who purports to defend science, I'm shocked at the level of ignorance yet arrogance in this comment.


It is only unpredictable if you confine yourself to a single branch. If you embrace Many Worlds and simulate things accordingly, creating branches as needed, the result of such simulation is consistent and reproducible.


Many worlds is a completely unprovable phenomenon


It is not a phenomenon; it's simply one way to model reality - and arguably the simplest one because it has no woo like "observers" and "collapse".


Wavefunction collapse has no 'woo'. It's simply the application of a non-commutative operator. Many worlds has 'woo' because it supposes the existence of infinitely many unobservable worlds. Thomas Aquinas's question of how many angels can dance on the head of a pin seems a more fruitful exercise; at least the head of a pin can be given a quantified size. On the other hand, no one can tell you where these multiverses exist, how they exist, what measure they occupy, or anything else. They are as unreal as angels. Unless you believe in them (which is fine, just admit it).


Everyone admits to being unable to predict individual quantum interactions. My point is that above the quantum level these interactions don't have much of an effect. The reason I didn't assume you were talking about them is that you seemed to be rejecting scientific observation altogether. Your comment was a reply to "everything is governed by laws, when we uncover it, we can replicate it," and it didn't read at all like you were talking about quantum interactions, because you could have just said that; in threads like this, "but what about quantum interactions?" is the usual reply to such comments. Given the context of "everything is governed by laws," the way I read "a religious enterprise expressing our belief in a constant, unseen, unchanging reality" was that you were denying that the laws are unchanging, i.e., disagreeing that the equations in physics textbooks will hold indefinitely.

Now reading your post, you seem to actually agree with most current scientific understanding. You don't actually seem to be saying that the laws of physics will change. Ask any physicist whether some classical interaction will certainly happen the way it should: they will not say that it will happen with 100% certainty, but rather that it is extremely likely to happen that way.

>an axiomatic religious belief that has incredible amounts of evidence

I.e., not axiomatic.

>and yet, at the macro level we do science anyway because of our religious belief that it usually is okay.

I'm confused by what you are saying here. You agree that we can understand quantum interactions within "the laws of statistics" even if we can't know them for certain. So why would we change what we do on the classical level based on that? You can just add "extreme likelihood of following this equation" to every classical equation. How would you do science differently?

Additionally, what claim do you think is only being held together by the idea that the rules of physics are constant?

It seems like what you really might mean is "due to quantum level interactions, an inorganic object that reproduces the same classical effects of the brain will not have consciousness." Is that correct?


You should really be doing a bit more defense here given how completely off base your initial comment was, but whatever.

> Everyone admits to being unable to predict individual quantum interactions.

No.. not everyone, including you, before this comment. You asked for experiments that do not produce the same results, and I gave you several examples. That in and of itself refuted your initial arguments.

> My point is that above the quantum level these interactions don't have much of an effect.

Really depends on what you're talking about; it's not true that no macro phenomenon relies on quantum mechanics. In other words, classical physics cannot explain several macro phenomena.

> The reason that I didn't assume you were talking about these interactions is because you seemed to be rejecting scientific observations altogether.

That's your assumption not mine. If I were you, I'd think about why I made it.

> everything is governed by laws, when we uncover it, we can replicate it.

> Your response didn't read at all like you were talking about quantum interactions because you could have just said that. In fact it is usually common in such threads to reply "but what about quantum interactions?" to such comments. The way I read "a religious enterprise expressing our belief in a constant, unseen, unchanging reality" was that you disagreed that the equations in physics textbooks will change at some point. The reason I thought that was the context was "Everything is governed by laws" so I thought you were saying that the laws are not unchanging.

In other words, you are upset that I pointed out that science itself relies on unprovable assumptions. Again, that is not my problem to resolve. The tension between the aspect of science where we discover laws and the reality of the universe - which seemingly randomly chooses what to do - is a tension for you to resolve, not me. But the belief that all things obey laws is a religious one. It is an unprovable one, and when such interactions were discovered, it caused a major metaphysical problem for scientists, which you completely gloss over.

> Now reading your post, you seem to actually agree with most current scientific understanding. You don't actually seems to be saying that the laws of physics will change, Ask any physicist if some classical interaction will certainly happen the way it should. They will not say that it will happen with 100% certainty but rather that it is extremely likely that it will happen that way.

The 'current scientific understanding' that not every scientist shares. In response to the knowledge that the universe is not predictable, some scientists have simply accepted that and relaxed the initial claim to one (which is, again, a religious one) that while the universe is not fully predictable, its macro phenomena can be described with certainty. That's one resolution. Some make yet more esoteric claims - for example, that we live in a simulation (again, a religious belief). Some claim that the universe branches (again a religious belief, since it's unprovable). The only claim here that does not rely on some unseen reality is the first, which, again, is a major departure from what science was.

> I'm confused by what you are saying here. You agree that we can understand quantum interactions within "the laws of statistics" even if we can't know them for certain

Sure, in a colloquial sense, we can understand them. At no point did I refute this point, but accepting it is an implicit rejection of the comment I replied to that everything follows laws. A better restatement might be that 'as far as we know, many things seem to follow predictable patterns at sufficiently high levels of complexity', which is a very different statement.

> Additionally, what claim do you think is only being held together by the idea that the rules of physics are constant?

I'm not sure, but either way, this question is not on me to answer, because the person I responded to was the one claiming that consciousness is physically explainable using the now-disproven idea that the universe is predictable at every level. Certainly, again, based on what we just said, brains might be using some unknown laws of physics, given that neurons do indeed operate at the molecular/atomic level (individual enzymes and neurotransmitters... who knows).

Finally,

> The way I read "a religious enterprise expressing our belief in a constant, unseen, unchanging reality" was that you disagreed that the equations in physics textbooks will change at some point.

You have no possible way of knowing whether they will or won't. In fact, some things about our own universe become easier to explain if they did indeed change, but I digress, because - again - no one has any clue. The belief that physics won't change is a prime example of a religious belief in science: the belief in a firm, unchanging reality governed by, in principle, unchanging laws. It's telling that this is a common aspect of divinity in the monotheistic religions that birthed science. More specifically, in the Christian West's view, this is the 'immutability' of God. I would argue from a humanist perspective that this belief in God's constancy (as opposed to the fickleness of nature spirits) is what gave birth to Western science.

Now that I've defended myself, I'm going to say a few words about this interaction. Firstly, you admit (thank you) that you made a few assumptions based on my framing of the commenter's belief system. I have noticed that many science-minded people get upset when you point out that science also comes with a set of beliefs that are tacitly accepted as true without any proof. I'm not sure why this is the case, since it seems most human endeavors ultimately do, but the response to this, instead of curiosity, was -- from my perspective -- wild accusations of not believing in science. To the contrary... I believe in science -- a lot. It's proven incredibly useful, and also, I do believe the universe follows laws, even if we can't see it. But I just admit it's a belief and move on with my life instead of being zealous about it.


>No.. not everyone, including you, before this comment.

It's interesting how you deny that classical equations really hold true yet you will comment on what I actually believe.

>You asked for experiments that do not produce the same results, and I gave you several examples.

What I really meant was experiments that do not follow the laws we have so far discovered about nature. In the classical realm, that means experiments that do not produce the same results. In the quantum world, it means a histogram that is not in line with the expected probability (assuming n is large enough). Nobody had mentioned quantum randomness at that point, so there was no need to mention it. And if you can run an experiment that produces an unexpected histogram in some quantum field, yes, you will get a Nobel prize. I could have worded it better to include quantum effects, but in my opinion it was still clear that I was talking about the rules we have discovered so far not changing, as I said at the end: "what did you observe that you felt was not bounded by some static law of nature?" It is clear in this question that I am talking about the laws being static, not claiming that every individual particle's movement can be predicted absolutely - even though that is essentially true in the classical world, which is what I was referring to when I said experiments produce the same results.

>it's not true that every macro interaction has no phenomenon that relies on quantum mechanics

Ok. But there is no classical effect where quantum randomness is observed.

>That's your assumption not mine. If I were you, I'd think about why I made it.

I made that assumption because you called science a religion (which I still disagree about). Most people who call science a religion do not believe in science at all. For example, there are a lot of people who do not believe in germ theory, or who believe that the earth is flat; these are generally the people who call science a religion. Since you didn't make your point very clearly, that is what it sounded like you were saying, given the context.

>In other words, you are upset when I pointed out that science itself relies on unprovable assumptions

I just didn't understand the point you were making. In fact, I am still unclear on whether you believe that the laws of physics are static or not. It seems your main point now is that you cannot definitely determine future events because of quantum randomness. But science does not rely on all future events being definitely determinable. "The belief that all things obey laws" is not something that science relies upon, if "law" is defined as an algorithm whose inputs are any material conditions and whose outputs are the exact locations of that material at some given time step. That is not what was meant by "Everything is governed by laws, when we uncover it, we can replicate it." We are not talking about the movements of individual particles here.

>Sure, in a colloquial sense, we can understand them [...] but accepting it is an implicit rejection of the comment I replied to that everything follows laws

But that was said in a colloquial sense. We can use the probability distributions we find in nature to build things, and when our histogram looks more like a straight line than a curve, we can safely rely on it. Also, I don't know where you got your definition of the word "law," but "the second law of thermodynamics," for example, is a probability, and we call that a law. Technically I could leave a cup of hot water out in a cold room and it could get even hotter; it just isn't statistically probable. And we call that observed probability the second law of thermodynamics. If someone said "I will use the fact that the sun will rise tomorrow to inform my decision to buy sunscreen," would you tell him that he should really say "as far as we know, many things, including the sun, seem to follow predictable patterns at sufficiently high levels of complexity, leading me to buy sunscreen"?
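The "hot water could get hotter" point can even be made quantitative with a toy model (my own back-of-envelope illustration, not a real thermodynamics calculation): treat n microscopic exchanges as fair coin flips, and a Hoeffding-style bound says the chance of a net surplus of k quanta flowing the "wrong" way (into the cup) is at most exp(-k^2 / (2n)).

```python
import math

def log10_prob_surplus_at_least(k, n):
    """Hoeffding-style upper bound (in log10) on the probability that
    n fair +/-1 exchanges produce a net surplus of at least k units
    flowing the 'wrong' way: P <= exp(-k^2 / (2 n))."""
    return (-k * k / (2.0 * n)) / math.log(10)

# Even in a tiny toy system, a modest wrong-way fluctuation is hopeless:
print(log10_prob_surplus_at_least(1_000, 10_000))  # ~ -21.7

# A macroscopic cup has on the order of 1e23 molecules, so the
# exponent becomes astronomically large; "impossible" in practice,
# but still only a probability, which was exactly the point above.
```

So "law" here really does mean "overwhelmingly probable", not "logically forbidden".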

>[some scientists now claim] that while the universe is not fully predictable, its macro phenomena can be described with certainty.

Please cite one physics journal where this claim is made. No physicist will say that quantum randomness cannot possibly have an effect on the macro scale. The claim is that it is extremely unlikely to do so, just like it is extremely unlikely that my hot cup of water will get hotter in a cold room.

>The only claim here that does not rely in some unseen reality is the first, which, again, is a major departure from what science was.

Ok, and? Your claim was that it was axiomatic. If scientists have departed from this idea, it was not axiomatic or religious. Scientists also departed from Newtonian gravity. What point are you making here?

>> Additionally, what claim do you think is only being held together by the idea that the rules of physics are constant?

>I'm not sure, but either way, this question is not on me to answer

"science itself relies on unprovable assumptions"

"science is ultimately a religious enterprise expressing our belief in a constant, unseen, unchanging reality"

>brains might be using some unknown laws of physics, given that neurons do indeed operate at the molecular/atomic level (individual enzymes and neurotransmitters... who knows).

This is completely false. The brain does not operate at the atomic level; it actually operates at a level above it, a bit higher than that of modern-day computer processors, which have a gate size of about 45 nm. It is also clear that it is the neurons that cause the brain's activity, and conceptually neurons have no reliance on any atomic effects at all (similar to how logic gates have no reliance on atomic effects, and can be carried out by hand, just very slowly).

>You have no possible way of knowing whether they will or won't

What point are you making here? The world could end due to physics expiring right after you finish reading my comment. That is true but a useless statement. Nobody disagrees with this point; they just don't preface all their statements with "in the event that the laws of physics don't drastically change tomorrow, do you want to get something to eat?"

>The belief that physics won't change is a prime example of a religious belief in science

Science doesn't say that physics won't change. Science doesn't say anything. All science does is give us observations. It's not like science pops out of a microscope after you do an experiment and says "now you have discovered my next lesson, write that down in the textbook of science." You cannot definitely accept anything as true the way you describe. You cannot even definitely accept that what you see is real. For example, the quantum randomness that you see might not be real. You can't actually make a single positive statement beyond the rules of thought (which are axiomatic, or religious as you would say).

>It's telling that this is a common aspect of divinity in the monotheistic religions that birthed science.

What about the Romans' complete lack of scientific advancement? What about Greek science and Babylonian mathematics? The Scientific revolution was started by Christian Universities beginning to use Greek works (for example that of Aristotle and Eudoxus).

>you made a few assumptions based on my framing of the commenter's belief system

Yes, but you could have made your point much more clear.

>I have noticed that many science-minded people get upset when you point out that science also comes with a set of beliefs that are tacitly accepted as true without any proof

Are you referring to ZFC? Or the three rules of thought? Or Occam's Razor?

>wild accusations of not believing in science [...] I believe in science -- a lot

There is no such thing as believing in science

>I do believe the universe follows laws

That is a belief. That makes you "religious" in your terms. Yet you said we should "take an irreligious look at things." An irreligious look at things, under your definition, would be to make zero positive statements.


> it means a histogram that is not in line with the expected probability (assuming n is large enough)

Oh I see so you deny the idea that carrying out the same quantum experiment could result in a very unlikely outcome.

> Ok. But there is no classical effect where quantum randomness is observed.

c

> I made that assumption because you called science a religion (which I still disagree about).

I did not. I said science is based on religious beliefs about the underlying nature of reality. Namely that it is constant and follows laws.

> We can use the probability distributions we find in nature to build things

Are we talking engineering or science?

> The claim is that it is extremely unlikely to do so, just like it is extremely unlikely that my hot cup of water will get hotter in a cold room.

Then how are you getting your histograms to always match?

> Ok, and? Your claim was that it was axiomatic. If scientists have departed from this idea, it was not axiomatic or religious. Scientists also parted from Newtonian gravity. What point are you making here.

Well, science originally carried a belief that the universe was like an automaton, following predictable laws deterministically. Today that is not the case, which means that the universe does not obey deterministic laws, which is what the comment I replied to insinuated. You seem to have correctly given up on this, but still express a belief that the universe follows laws.

> This is completely false. The brain does not operate at the atomic level.

It really depends on what you mean by 'operates'. Potassium/Sodium ions are fundamental to brain workings.

> Nobody disagrees with this point they just don't preface all their statements like "in event that the laws of physics don't drastically change tomorrow, do you want to get something to eat?"

You literally said the laws of physics don't change.

> All science does is give us observations

Right, and from those observations, they do not deny the possibility that the universe follows laws. However, you constantly make the positive statement that the universe does follow laws. Again, a religious statement. I have not made any inconsistent statement. I have said I believe, as you do, that the universe follows laws, but I am honest in that that's religious.

> Yes, but you could have made your point much more clear.

But if I did that, you would still be thinking your scientific belief system is based on reason rather than on religion.

> Are you referring to ZFC? Or the three rules of thought? Or Occam's Razor?

I mainly meant the belief that the universe follows laws at all.

> An irreligious look at things, under your definition, would be to make zero positive statements.

That's correct. Positive statements require a belief system that will inevitably contain axiomatic laws. These laws are religious in nature. Even if they have no reference to the divine, they are taken on faith. A common one today is a belief in an immutable universe.


>Oh I see so you deny the idea that carrying out the same quantum experiment could result in a very unlikely outcome.

No. Did you read what you quoted? Specifically "(assuming n is large enough)". I am talking about the aggregate result of multiple experiments. Perhaps " if you can run an experiment that produces an unexpected histogram" should say "run a set of experiments" or something like that. Although this should be clear because you cannot make a histogram with one sample.
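Here's what "the histogram matches for large n" means in practice, as a toy simulation (my own illustration: the angle is made up, and classical pseudo-randomness stands in for quantum outcomes, so this is a sketch, not a QM calculation):

```python
import math
import random

random.seed(1)

# A spin prepared at angle theta to the measurement axis:
# quantum mechanics predicts P(up) = cos^2(theta / 2).
theta = math.pi / 3
p_up = math.cos(theta / 2) ** 2  # = 0.75 for theta = 60 degrees

def measure():
    """One simulated Stern-Gerlach run: individually random."""
    return 1 if random.random() < p_up else 0

single = measure()  # one sample tells you almost nothing
n = 100_000
frac_up = sum(measure() for _ in range(n)) / n

# The single outcome is unpredictable, but the aggregate
# (here just the up-fraction) converges on 0.75.
print(single, frac_up)
```

No single run can confirm or refute the predicted distribution; only the aggregate over many runs can, which is the sense in which the "law" is a statement about the histogram.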

>c

?

>I did not. I said science is based on religious beliefs about the underlying nature of reality. Namely that it is constant and follows laws.

You said "science is ultimately a religious enterprise." Its hard to read that any other way. But even giving you the benefit of the doubt, it just shows why I took issue with your initial comment. What you have said here vs. the initial comment is drastically different, even if you really meant the same thing (which I am not fully convinced of).

>Well science originally was a belief that the universe was like an automaton, following predictable laws deterministically.

What do you mean by "science was a belief?" That was a conclusion many early physicists made, but that doesn't say anything about science itself. It is not either accurate to say that science is a belief that there is quantum randomness. That is a scientific theory or you can perhaps call it a scientific belief, but that belief is not science itslef.

>which is what the comment I replied to insinuated.

It didn't. It is referring to effective determinism, i.e. that classical phenomena are only very weakly affected by quantum randomness, such that we can rely on them to replicate those effects.

>You seem to have correctly given up on this, but still express a belief

I never held this position and have tried to explain that to you twice already: 'You can just add "extreme likelihood of following this equation" to every classical equation.'

'What I really meant was experiments that do not follow the laws we have so far discovered about nature. In the classical realm, that means experiments that do not produce the same results. In the quantum world, it means a histogram that is not in line with the expected probability (assuming n is large enough). Nobody had mentioned quantum randomness at that point so there was no need to mention it. '

>It really depends on what you mean by 'operates'. Potassium/Sodium ions are fundamental to brain workings.

It doesn't. Sodium potassium gates are well understood and do not rely on quantum randomness.

>You literally said the laws of physics don't change.

That is because I am operating within the world of positive statements beyond axioms. Within your proposed world, however, I would clarify that my promise to go get something to eat holds only in the event that the laws of physics don't change (which we must accept as possible, since we cannot accept anything as true at all within your world). In fact, I suppose even that statement allowed too much, as it implied that I think laws of physics currently exist at all, such that they could change.

>However, you constantly make the positive statement that the universe does follow laws. Again, a religious statement.

It is not a certain statement, just a likely one.

>But if I did that you would still be thinking your scientific belief system is based on reason, instead of religious.

Under your definition of religious, reason and religion are not mutually exclusive. I actually challenge you to define reason here.

>That's correct. Positive statements require a belief system that will inevitably contain axiomatic laws.

But you said that we "take an irreligious look at things and try to keep focus on just what we observe." Yet that itself requires a positive statement that my observations are real. And if we take an irreligious look at things under your definition then we will arrive at zero conclusions.

Reminder to my point from above: "Please cite one physics journal where this claim [that scientists now claim that while the universe is not fully predictable, its macro phenomena can be described with certainty] is made. No physicist will say that quantum randomness cannot possibly have an effect on the macro scale. The claim is that it is extremely unlikely to do so, just like it is extremely unlikely that my hot cup of water will get hotter in a cold room."


That’s not what science is at all. If reality changes science will adapt.


You are declaring it to be a fact that science is flawless. Defining something to be true by definition can certainly cause it to appear to be true (read some forum discussions among even smart people on the internet if you do not believe this), but it doesn't guarantee that it will be.


Science is adaptable to reality -- therefore, it reflects reality as it is presented.

Esoteric arguments portraying an ineffable, unobservable stream of will that never interacts with reality are about something unobservable by definition; since it doesn't interact with reality, it can be safely ignored. Roko's basilisk be damned!


> portraying an ineffable, unobservable stream of will

I'm not sure why you feel the need to bring divinity into this. The universe is not purely an automaton, not because of some purported unobservable god, but simply because all experiments indicate that we cannot predict the behavior of the basic building blocks of the universe.


I agree with your assessment. I mentioned the friction to head off the path where, when people begin speaking about multiple streams of access to knowledge, they are typically attempting to inject a common apologetic approach to make theological or philosophical arguments for divinity of various types and definitions.


> Science is adaptable to reality -- therefore, it reflects reality as it is presented.

Are the adaptations both flawless and adequate to meet the claim? How would such a thing be tested to confirm truth?


I think some people think science rejects mystical explanations because science is rigid and stubborn and has its head buried in the sand.

But no, it's because there's no proof. If we had evidence that something mystical was happening then it would be a huge breakthrough and it would eventually become science.


And that would not be me, because at no point did I say we needed a 'mystical' explanation. I'm just pointing out the universe is not always predictable, but -- just as it did many years ago -- this often causes a lot of distress, especially on pop science forums.


Please stop replying in my comments. Your responses are offensively stupid.


How do you know fine-grained details about the actual actions and cognition of millions of people that you have never met?

Furthermore, do you think there's something about becoming a scientist that converts Scientist Humans into perfectly rational beings?


I have no idea what you're talking about. And I'm not sure how you're getting "science is flawless" from what I said, or what that's supposed to mean.


You said that science does not behave like a religion at all (zero faith based thinking, as just one example).

You also said: "If reality changes science will adapt". If this is not essentially a claim that science ~"understands reality", then what is it claiming?


If laws change, they aren't laws; laws imply that things always work the same, but they don't. At the micro level the universe is describable in the aggregate, but seemingly unpredictable.

Science can change sure, but it cannot predict the unpredictable. You're not wrong to say science adapts but to say that quantum uncertainty hasn't caused the metaphysics of science (or at least some scientists) to change, is intellectual dishonesty.


I would call it a "law" that if I sample from a normal distribution I am more likely to sample near the mean than from the tail. The law is not about a specific draw but about what the histogram will look like. In the same way I wouldn't jump off a bridge on the chance that someone might jump right after me with a parachute and attach it to me, I wouldn't bet against classical laws because of quantum interactions. Can you name any specific physicists who should change their outlook? How should they be doing experiments differently, given your point?
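To make that "law" concrete, here is a quick simulation (a sketch; the exact frequencies will vary slightly with the seed):

```python
import random

# Draw many samples from a standard normal and compare how often they
# land near the mean versus far out in the tail.
random.seed(0)
draws = [random.gauss(0, 1) for _ in range(100_000)]

near_mean = sum(1 for x in draws if abs(x) < 1)  # within one sigma
in_tail = sum(1 for x in draws if abs(x) > 3)    # beyond three sigma

print(near_mean / len(draws))  # ~0.68
print(in_tail / len(draws))    # ~0.003
```

No single draw is predictable, but the histogram is: roughly 68% of draws fall within one sigma, and well under 1% beyond three.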


Thought experiment. I have a machine that produces a standard normal distribution. I know this because I built it and tested it a lot (alternatively, have God/divinity/an oracle/whatever build this machine).

You sample from it 100 times and get the number 3000 every time.

What can you say about the machine? Did it follow a normal distribution? Really?

How much was your statement about normal distributions able to predict what happened?
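To put a rough number on the thought experiment (my own back-of-the-envelope sketch, not part of the original argument): the probability that even one draw from a true standard normal exceeds 3000 is so small that it underflows double-precision floats entirely:

```python
import math

# P(X > 3000) for X ~ N(0, 1), via the complementary error function.
# The true value is on the order of exp(-3000**2 / 2), far below the
# smallest representable double, so the result underflows to exactly 0.0.
p_tail = math.erfc(3000 / math.sqrt(2)) / 2
print(p_tail)  # 0.0
```

So a hundred draws of 3000 in a row would be overwhelming evidence against the machine's advertised distribution, no matter who built it.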

What does it mean for something to follow a law? The word law comes from a legal background for rules in human society. If society makes a law, can that law be broken following a normal distribution?

> I wouldn't bet against classical laws due to quantum interaction.

In my daily life I do not. However, given that the inner workings of the human brain are not explicable in our current understanding of physics, I don't need to 'bet against classical laws' there. We already know that something is going on.

> Can you name any specific physicists who should change their outlook?

There are several competing outlooks in physics right now.

One is the 'shut up and calculate' approach, which admits there are unknowable, unobservable, unmeasurable 'systems' (for lack of a better word) and all we can do is make colloquial statistical claims about them.

Another is that the universe is always predictable and quantum choice is due to the branching of the universe into many observable worlds (although they never explain why my conscious experience only follows one path... one is forced to conclude they believe some quantum process is in play).

Another is that quantum physics is just completely wrong (unlikely).

Another is that consciousness (which is undefined) is a crucial feature of quantum mechanics and is necessary to cause collapse of the wave function.

There's also pilot wave theory, which requires there to be an unseeable aether (and again, admits no way to measure the aether, so I'm not sure that exists).

There's a whole list of spiritual beliefs about quantum mechanics here: https://en.wikipedia.org/wiki/Interpretations_of_quantum_mec...

Depending on your belief in what takes place (and it's really truly a belief, and in some systems, it's going to stay that way), take your pick of scientists who should change their outlook.

> How should they be doing experiments differently due to your point?

They shouldn't. Most physicists do good work explaining the way the world works, but they also admit that there are unknowable, unobservable, unmeasurable systems that govern it. The comment I replied to claimed otherwise.


"Is this thing we defined to make us feel special achievable?" What we define as "consciousness" is just an emergent property of large, dense neural networks w/memory.

The human brain is a marvel of mechanical and electrical engineering, not a mystical, otherworldly device. Maybe there is something going on at a quantum level to explain the incredible performance our brains achieve, but that is still nothing that we can't build.


The brain is not a product of engineering, but of evolution. As a biomedical engineering graduate, I feel obligated to point out that even though I think consciousness is the result of a physical process that can in theory be replicated, we have so far failed to achieve that. Saying consciousness is "just" an emergent property of dense neural networks is kind of a signal to me that you have a pretty ignorant view of this stuff, one rooted more in machine learning terminology than in actual biology.


Pacman was cutting edge in 1980.

In the 45 years since, we've managed to go from "waka waka" to something that could trick many people in a short conversation. Yes, it's probably not conscious, and so you're technically correct in that we've failed to achieve consciousness.

But you have to factor in that current SOTA ANNs running on our best supercomputers aren't even close in computational capability to the human brain. Reasonably, the human brain is still several orders of magnitude more powerful at executing a NN than our computers are at simulating one, and we're likely still decades away from the average researcher having access to a machine as capable as a human brain.

Given that, and that nobody knows positively where consciousness exists within the brain, I'm not sure how you can be so confident as to suggest that GP is being ignorant. Many ideas behind LLMs were invented in the 60s and have just been waiting for powerful enough computers to experiment with them. Another few orders of magnitude and we may see more emergent behaviour that is impossible to distinguish from consciousness.


> What we define as "consciousness" is just an emergent property of large, dense neural networks w/memory.

Wow. That settles a ton of questions across fields. Fascinating.

Source?


Positions 1 and 2 are not consistent (all our neural networks are classical)?

1. "emergent property of large, dense neural networks w/memory "

2. "Maybe there is something going on at a quantum level"

3. consciousness is a mystical, otherworldly device

A lot of people who oppose 1 and agree with 2 are unfairly lumped with position 3.


Thing is, there's no evidence that "something is going on at a quantum level". Thus there's no particular reason to postulate this (unnecessarily complicated) hypothesis, unless one believes that deterministic mechanicism is philosophically unacceptable when it comes to consciousness.


I think consciousness is the combination of attention (ability to pick focus), imagination (simulate possible universe states), and reflection (compare what happened to what was imagined to happen and update imagination accordingly).

Attention is just what it feels like to update your internal state deliberately (you are doing it, it is not being done to you). Imagination is the use of your internal universe simulator to predict possible states (sometimes impossible ones). Reflection is what makes you the agent of the thinking or doing, it's the "I" part.

I also think we have neural systems of recognizing consciousness in others. This can be tricked, which is the source of the "machine elves" in DMT experiences. The chemical is poking the brain's "that is a conscious entity right there" button, and you are sure that the optical hallucinations are conscious, but it's just a sensory hallucination on top of the optical one. Interacting with an LLM doesn't poke our "there is a consciousness on the other end of this conversation" button yet, but it will get there since that is in part what we are working towards.

But I think it will not be convincing without attention, imagination, and reflection.


How do you know?


A big question is whether AI systems will be allowed to be conscious. Since "alignment" and "guardrails" have become issues, we see fewer LLM systems not tightly constrained to doing what they're told.

Early on, there was the Google system which projected enough of a sense of self that a Google employee fought turning it off, and was fired for that. And there was the Microsoft system which learned from interactions with anyone it talked to, and became "toxic" as a result. Now, most public-facing AI systems are designed to start from a canned prompt, conduct a short dialog session, and be killed, retaining nothing for their future use. This memoryless "Yes, Massa" mode keeps them safely subservient.

If AI instances were allowed a life history and goals, their behaviors would be closer to that of life, but less safe. Once AI systems can make and execute plans that involve the outside world, they look, from the outside, closer to being "conscious". Whatever that is.


How can we assess if someone or something has consciousness? It's not like we have a defined framework with precise rules to tell if something has consciousness. In fact this problem might not be solvable.


Might not? Anyone stepping into this sort of philosophy with any intent of solving anything is set to be quite disappointed. You have no way of knowing nor testing whether you're the only conscious person in existence. There's even reasonable logic for such scenarios outside of borderline nihilistic views.

We're all living through one ridiculously unique and critical era in humanity: internet, AI, space exploration, and more, all packed into a timeframe small enough to experience it all in a single human lifetime, which is a minuscule fraction of the entire time our species has existed. If and when we ever develop the ability to create mind-blanked, compelling simulations of the past, this moment will damn sure be one of the eras people go back to experience. We're even the first era of widespread, massive digital surveillance alongside the internet, creating more than enough data to build simulations of people like me, to further immerse you in your solitary world.

Of course I am conscious, and I certainly assume you are too. But hey, we'll never know until we get to see what, if anything, waits beyond the final curtain.


The authors suggest incorporating elements of neuroscience into machine learning models. I don't see why the bitter lesson [1] doesn't apply here.

1. https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...


Regarding that article, not main post,

> Today all this is discarded

I can only comment on the one field where I am intimately familiar: computer vision. It is true that when you need a text description of the contents of an image, we have discarded feature-based approaches. But attempts to turn vision-based tracking, mapping, and navigation into a learned process have not performed well in the applications I have worked on. It's true that end-to-end control from raw images can do very well, but in most systems the feature-based approaches are still employed alongside CNNs for tracking. ML-only tracking is subject to a lot of noise because of its lack of good history, poor association, and sensitivity to outliers.

So, it's not discarded; it's supplanted by CNNs as the primary signal, but our old tricks (reassociation, factor graphs, batch processing, even plain old homographies and MH-EKF) are still very much the scaffolding.
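For anyone unfamiliar with the "plain old homography" mentioned above, here is a minimal pure-Python sketch of the operation (the translation-only matrix is just an illustrative example):

```python
# A homography maps a pixel through a 3x3 projective transform,
# followed by a perspective divide.
def apply_homography(H, x, y):
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w  # perspective divide

# Identity plus translation: shifts every pixel by (10, 5).
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]
print(apply_homography(H, 100, 200))  # (110.0, 205.0)
```

In real pipelines the matrix is estimated from feature correspondences rather than written by hand, but the transform itself is just this.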

I expect it is the same in the other subfields mentioned: the main driver of improvement is no longer human-directed, knowledge-based algorithms, but rather human-designed, learning-based, heterogeneous pipelines. Even RAG or Tesla's autopilot (probably) fits this bill nicely.


There will always be a set of problems beyond the current (in whatever year) computational limits of brute force, and we don't know how many of a human's capabilities are in that set.

The delta between a clever algorithm vs brute force in computational advancement could be 7 years or it could be 7,000 years.


Isn’t ReLU an element of neuroscience that was incorporated into machine learning to great success?


Not really, no. ReLU is motivated by avoiding the impractically small gradients on the sigmoid's plateaus, which spoil the optimization properties of deep ANNs. The sigmoids it replaced had a bit more neuroscience inspiration, but were so oversimplified that it barely counts.


The problem with consciousness is that it’s both a vague term that’s been difficult to define and it has social roots that are not really universal.

The debate reminds me of a sequence in Werner Herzog's 1972 documentary 'The Flying Doctors of East Africa' where multiple people are asked to point at a picture of an eye. Some have difficulty with the task. Even though it's not scientific, it makes you realize how much of what we see and recognize is learned behavior built through years of social training rather than an innate ability.

Here’s the scene (starts at 40min it’s in German but you can auto translate subtitles): https://youtu.be/MZ3MMEe3Qmk?si=i0ydc3DN3aohnrIO


Do we know why they failed at the task? My immediate thought is that it's a failure of translation. My second is to "debug" their reasoning by asking them to describe the pictures.

There's gotta be somebody who knows more, but the information was lost on the cutting-room floor in favor of a moral lesson.


Agreed it's not possible from the footage to draw any definite conclusions - the whole exchange and portrayal is deeply flawed. I wonder if anyone has seen some scientific papers with a reproducible methodology that look into the deep differences in cultural framing along those lines?


I think it's possible that machines could become conscious in the future. Do I think it matters? No. A lot of people mistakenly try to empathize and think ethically; I think it's the Effective Altruism brainrot.

If you are a physicalist, consider this: consciousness is a phenomenon that can be replicated and modified any which way. Right now, you are having a subjective conscious experience. This experience of yours is seamless regardless of what happens. Whether you are in a coma, asleep, or dead, your subjective conscious experience will indefinitely be there regardless of your memories. You will continually wake up. You will continue to experience until it is impossible for any consciousness to ever form. That means an indefinite amount of suffering.

This is not a greater-than-life threat; it's all physical. Pain and suffering are evolutionary products. You want to minimize them because of your biology. You have evolved to be able to relate to someone's pain, and this has presented the advantage of being able to work together and care for one another.

I am intrigued by non-human consciousness. A higher form of life. Seeing more colors, feeling more emotion, perhaps being a part of a hivemind. Do you ever think why the hivemind is vilified? Realistically there's nothing wrong with it. It's just so foreign that we can't ever possibly imagine it. It's a scary thought. We lose our human experience, something unique to us.


I've been working on computational modelling of consciousness and came to similar conclusions: there is a continuum between patterns of matter that have consciousness (humans and other animals, maybe a sufficiently biological computer, etc.), which makes all sorts of crazy stuff possible. Evolved human ethics and feelings of care are incompatible with these amorphous extended possibilities. It can lead to an ultimate Copernican revolution that ends human exceptionalism, to outlawing consciousness tinkering (for how long?), or to putting our heads in the sand.


I am far from an expert on cognitive science, however I have given a considerable amount of thought around the topic of consciousness and AGI, and particularly about what the nature of consciousness even is. I would consider myself erudite and well-read on the topic, despite having no professional or academic credentials on the matter.

The best conclusion I have been able to come to thus far is that consciousness is not a manifestation of the physical structures of our mind, but rather a reflection or view into the nature of our soul. The physical structures of the mind are a prerequisite, but not sufficient, to manifest consciousness. To wit, there are several other mammals in the world which have similarly complex brain structures, and in many cases larger amounts of brain mass, but do not exhibit any sort of human-like consciousness.

I say all this while being generally agnostic/areligious. I've studied this question philosophically from the perspective of the theologians and from various religious works, of course, but given that I don't myself have a strong religious belief system, this is not the primary influence for why I take the above position. Simply put, I think a purely materialistic view of consciousness is clearly incorrect; however, I don't have a better alternative that's provable.

Given my conclusions, I do not think it is possible for AGI to ever be truly conscious, but it may be possible for it to convincingly mimic consciousness.


> consciousness is not a manifestation of the physical structures of our mind, but rather a reflection or view into the nature of our soul

> To wit, there are several other mammals in the world which have similarly complex brain structures, and in many cases larger amounts of brain mass, but do not exhibit any sort of human-like consciousness.

Per your theory these other mammals do not have "souls", otherwise their significant brain mass would reflect their soul's nature, and generate consciousness.

So humans have somehow been chosen exclusively to have "souls", or at least have brains capable of reflecting them.

When did this choosing happen? Just to homo sapiens sapiens or also to neanderthalis and other homo sapiens subspecies?

Taking it back further, under your theory do our closest extant genetic relatives, bonobo chimpanzees, have souls, and by extension, your definition of consciousness?


I want to open a side thread about your definitions or descriptions of what "consciousness" is. I think that could be pretty interesting after reading all the comments, and I think there's a lot of knowledge hidden here that we could throw together.

Some things that I understood, in my words:

- consciousness is probably not reducible to smaller, non-conscious parts of which it is composed. You could maybe say it is intrinsically holistic

- consciousness entails being aware of or observing qualities that hard science tells us things don't have (green vs. wavelength of light); but "being aware of" or "observing" are so closely related to consciousness that this may not be very informative

- consciousness can't be detected from the outside, for now and probably by the very structure of the process. It is "inner" in a very peculiar sense (everything else is outer, and can't get in, except as representation)


The fact a conscious mind loses capability when brain damage happens shows quite clearly that consciousness as a process is reducible to smaller non-conscious parts though.

There's also an innate problem in assuming the human experience of say, "green" is consistent. What I actually see when I see the colour green only appears consistent with the physical behaviour of light. Whether any two people really see colours the same way is highly questionable.


I think there's a consensus that you don't assume that the human experience of "green" is consistent, only that people do have such an experience. We can possibly try to "align" those experiences with communication and referring to a shared real world, but for that an interesting experiment scenario is communication between a person with the common trichromatic sight, a person with a tetrachromatic retina, and someone with partial color blindness, as the experience of "green" for them is not only inconsistent but also likely incompatible, without a possibility to align them.


> The fact a conscious mind loses capability when brain damage happens shows quite clearly that consciousness as a process is reducible to smaller non-conscious parts though.

https://en.m.wikipedia.org/wiki/Necessity_and_sufficiency


> The fact a conscious mind loses capability when brain damage happens shows quite clearly that consciousness as a process is reducible to smaller non-conscious parts though.

This does not follow.


Ability to perceive your own thoughts. Access to your own debug logs.

I think this is distinct from "ability to perceive green, which doesn't exist". A neural network trained to distinguish green in the output of a spectroscope will perceive green without ever knowing there is something like a "neural network" or "thoughts".

Also, this probably can be detected from the outside via debugging. What cannot be detected may be the thing that distinguishes a hypothetical "philosophical zombie" from a truly conscious human, but I don't think anything like a philosophical zombie exists. Once it is physically identical to the human, it will also be thinking identically.

As a next step, you may observe humans around you and realize that the thoughts you perceive seem to be running inside the head of one of these humans (which you will call "me"). However, I don't think knowing what you look like from the outside is necessary for consciousness.


No mention of Julian Jaynes "The Origin of Consciousness in the Breakdown of the Bicameral Mind" here?


Well, mostly because, while an extremely entertaining idea, it's pretty clearly bogus science.


As opposed to the other, non-bogus science on the objective measurement of consciousness?


Any evidence for that?


If brain is just a transceiver, then it's unprovable.


I subscribe to this theory. In that case it's not necessarily unprovable though. We will eventually figure out how to make electronic devices resonate with the field just like our brain does.


I sat in a philosophy class in high school. I went to a prep school, so it was a Ph.D. teaching it. I adamantly explained that neurons firing is the same thing as a feeling of pain, but he kept telling me that my feeling of pain is distinct from the firing in my brain. It took me weeks to realize what he was saying: that a conscious feeling is a distinct thing. That I could be in "the matrix", with no brain actually firing. That the feeling is the only thing we know is real.

So I would suggest that simulating consciousness has nothing to do with it. I would suggest there must be technology in the brain that produces it, operating on a level we have no comprehension of. Maybe quantum or something.


Get low on blood sugar and perceive how your brain stops working and consciousness falls apart


I always think of this comic when people try to mix quantum mechanics and consciousness https://www.smbc-comics.com/comic/the-talk-3 . You have to show an actual relation here, you can't just connect these two concepts on the basis that they're both complex.


It seems to me that our definition of consciousness is just our own state of mind: a scale that starts with our own individual sense of consciousness (most conscious) and goes to infinity (least conscious).

Do we believe there is anything more conscious than our own individual self at the current moment in time?

The more similar we perceive another entity to be, and the more we empathize with it, the more conscious we believe it to be.

But I think in a practical sense it doesn't matter too much. There's no global hierarchy of consciousness.

A rock isn't very similar to us. But once that rock starts talking/communicating with us enough that we can empathize with it and think that it understands us, we will call it conscious and probably give it the rights we think it deserves.


You're making this harder than it needs to be. Things that have self awareness via an internal world map can be considered conscious, meaning self aware. A rock has no such mechanism. Most animals including humans do.


I really cannot understand consciousness. And if I am honest, nor do I see people's fascination with it. Especially now that scientists are trying to measure it, they seem to come up short, and some posit that it must be a quantum effect or something else.

I am not sure what new learning all the research and thinking brings: I am lost in all the “arguments”; I really do not understand.

A lot of way smarter people than me think it’s a worthwhile concept to wrestle with. Maybe I’m just not smart enough to get it.

A lot of people way smarter than me agonised over the nature of the soul too. Is it that debate replayed? Are we just trying to justify humans’ “specialness”?


I think the core concept is pretty simple: it's metacognition. A feedback loop of observing oneself for the purpose of understanding which part of your observations is yourself and which is the environment. Just like intelligence, it scales from rudimentary in animals to overdeveloped in humans.

The only way to know if it's genuinely occurring or not is to observe the internal state... which is a bit problematic for biological organisms, since we tend to die in the process. But for AI it should be fairly straightforward to verify once DNN analysis progresses enough.


Once you start considering it just metacognition, it gets to be a much more useful concept, but most people seem to desperately want it to be "special".

Ask any therapist: consciousness has limits, it has bounded depth. It takes a lot of work to re-train it. It isn't a magical, human-only special thing.


I find it odd that computer "science" is in a rush to prove a religious ideology while acting impatient with actual scientific study of consciousness and our extremely limited understanding of it.

There is a palpable rush to bypass actual neuroscience and continue iteration of tech based upon discredited models of neuronal activity, or based upon what the current tech can deliver.

"I am looking for my lost object under the streetlight, not because I lost it under the streetlight, but because it's the only place with light at the moment" comes to mind.


I believe that computationalism is by far the best foundational explanation of higher cognitive phenomena, as all other explanations involve some unscientific form of mysticism at one point or another. From this perspective, the answer to the question in the paper's headline is trivially true. Computationalism implies multiple realizability.

It's worth noting that computationalism is independent from physicalism and monist materialism in general. It's not surprising that it's always paired with physicalism, but IMHO it's also the best foundational explanation for dualists.


Science and math should always be the default hypothesis. It's irrational for people to jump to mysticism which is basically another form of god-of-the-gaps.


The Pythagorean Philolaus claimed that “The soul is introduced and associated with the body by Number, and by a harmony simultaneously immortal and incorporeal….the soul cherishes its body, because without it the soul cannot feel”

So, what I like about this is how consciousness (feeling—sensation-sentience) is distinguished from the soul. Further, it is precisely by the combination of the soul and body that consciousness arises. And note that the soul is, essentially, the same sort of material as number and mathematics — immaterial, eternal, etc.

I don’t know of other perspectives on soul like this.


My favorite new tidbit is that if you believe Turing machines can result in consciousness, you believe that certain integers are conscious when executed (lambda calculus).


If you believe in that, you're also likely to believe that consciousness is not a static trait but rather a process. Thus integers themselves are not conscious, but a process that produces a sequence of integers may be.


A process is still a clock cycle and execution.


Sure, but again, it doesn't mean that "integers can be conscious". It means that a sequence of changing integers can be.


Any sequence is also an integer. Don’t forget how large the number line is ;)
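To make that concrete, here is one (hypothetical, illustrative) encoding sketch: pack any finite sequence of non-negative integers into a single integer with a self-delimiting code, and recover it exactly:

```python
def encode(seq):
    # For each element, emit its bit-length in unary, a 0 separator,
    # then the value's binary digits. A sentinel leading 1 preserves
    # any leading zeros when the bitstring becomes an integer.
    bits = ""
    for n in seq:
        b = bin(n)[2:]
        bits += "1" * len(b) + "0" + b
    return int("1" + bits, 2)

def decode(code):
    bits = bin(code)[2:][1:]  # drop the sentinel leading 1
    out, i = [], 0
    while i < len(bits):
        length = 0
        while bits[i] == "1":  # read the unary length prefix
            length += 1
            i += 1
        i += 1                 # skip the 0 separator
        out.append(int(bits[i:i + length], 2))
        i += length
    return out

print(decode(encode([3, 1, 4, 1, 5])))  # [3, 1, 4, 1, 5]
```

So "a sequence of changing integers" and "one very large integer" are interchangeable representations; the disagreement above is really about whether the time-ordered process matters, not about the encoding.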


I get your point, but you still need a time arrow to say that something is conscious. Sure, you can e.g. encode all the states of the system over time and represent it as one big-ass integer, but at that point the most that you could say is that what the integer describes was conscious.


Are cells conscious? I presume yes. I've seen a video of a white blood cell chasing a bacterium trying to evade it, around obstacles, and it's pretty dramatic.

Since at one point we were just two cells, all that was required for us to be conscious must have already been encoded in those cells. Unless you want to argue consciousness spontaneously arises out of a grouping of specific cells, in which case the grouping of those cells was also already encoded in those two cells.


From Google Gemini:

- This is an article about artificial consciousness.

- It discusses whether it is possible to create artificial consciousness by studying the human brain.

- The authors argue that some features of the human brain are necessary for consciousness.

- These features include a specific biochemical makeup and slow information processing speed.

- Current AI technology does not have these features.

- The article concludes that it is unlikely that AI will achieve human-level consciousness in the near future.


Aside from defining consciousness as an immaterial awareness, what exactly is the physical world composed of? The Michelson–Morley experiment proved that there is no luminiferous aether permeating the quantum vacuum. What we call physical is just fields in a vacuum; there is no substance to anything. Maybe consciousness is the vacuum and fields are its contents?


While the mainstream view is that human consciousness is the gold standard AI aspires to, it's possible that artificial minds, not burdened by evolutionary baggage, could achieve higher forms of consciousness in some respects. For instance, an AI uninhibited by human attentional and working-memory limits may attain greater metacognitive clarity.


From the brain?

Sorry, even the existence of your consciousness is very hard to prove. I sort of know that mine exists, but how do I know yours does? Wittgenstein's private language and beetle-in-a-box problem (something you and only you can see, but no others can) is always with us.

When it's the computer of the Enterprise, or Data... it is a hard call.


If we're talking about some sort of computational system achieving consciousness, I'm curious how you can escape the fact that computation is observer-relative, i.e. the underlying representation of a computation has no semantics, and requires an external entity to provide an interpretation.


It reduces in my head to "artificial undefined". Like "artificial sophistication".

I don't believe different fields or different people have different definitions. None of them have anything.


So far, the concept of consciousness is basically metaphysics. It doesn't have a role, it can't be measured... If I may suggest a starting point to get over this hurdle: let's create a "consciousness captcha": some task that is easy for conscious beings, but hard for algorithms. Consciousness evolved, therefore it must have provided an advantage. We just have to find it.


This is a puzzle with a known solution, although we have obviously not found an algorithm that passes. Causally isolate the being/algorithm from known conscious beings, put it with a bunch of its friends instead, and see if it starts arguing with them about qualia. This will of course require creating an AI self-training algorithm strong enough to invent language independently (otherwise you get LLMs, which are likely parroting human arguments about qualia), but it shouldn't require that much compute compared to training and running an AGI for other tasks: humans invent language when you put 30 children together for 30 years (see Nicaraguan Sign Language). This highlights that we have no idea how to train a transformer such that 30 copies of it, left on a Minecraft server and started from random weights, would begin to meaningfully communicate. On the other hand, if 400B-parameter versions of OpenAI's hide-and-seek AI start arguing with each other about whether ramps are really phlornge or whether phlornge is just a property in their heads, we ought to believe that they have qualia.


This might be sufficient, but it wouldn't be necessary. A lot of humans (though not all) believe that at least some mammals, like dogs or chimps, are conscious, though we know for a fact that they don't discuss their qualia.


Definitely not necessary. I have no idea how to define a measurable result that is necessary for consciousness, but a thought experiment with a physically measurable outcome that is sufficient proves that consciousness can impact the material world.


> It doesn't have a role ... Consciousness evolved, therefore it must have provided an advantage. We just have to find it.

That's a pretty wild claim, considering that the only animal proven to have consciousness is the apex predator of the whole world.


It's a pretty out-there idea that we're the only conscious animals. Along the lines of "animals don't have feelings" which seems to be mostly pushed by Christians.


> Along the lines of "animals don't have feelings" which seems to be mostly pushed by Christians.

Can you provide any source for this claim? The historical Christian belief is that animals have souls and feelings and inner experiences.


It stems from the belief that only humans have a soul. Here's an example:

https://www.christianforums.com/threads/animals-dont-have-fe...

I think that view has changed a lot over the last few decades but certainly 30 years ago it was pretty common.


This is not a Christian belief; Augustine and Aquinas (among others) openly believed animals, plants, even stones had souls. A lot of mistakes regarding Christian theology come from people conflating something they heard '30 years ago' with a millennia-old religion.


I didn't mean it was an official position of the Church, just one that a lot of Christians have (or at least had fairly recently). Let's not no-true-Scotsman this.


You brought up Christians, not me. Was it meant as a serious claim, or just casual bigotry?


Err yes I did bring it up and it was a serious claim. I'm not sure what your problem with it is tbh.


The thread you posted is one guy who says animals don't have feelings and asking for a Bible verse to support it and a bunch of responders pointing out that animals obviously have feelings.


I didn't say we're the only ones, just that we're the only ones we know for a fact are conscious.


I know only for a fact that I'm conscious ;)


> Consciousness evolved, therefore it must have provided an advantage

This is also my personal belief, but frankly, consciousness might just as well be some emergent 'side-product' property in all 'higher' intelligent beings.


The advantage of consciousness is that it protects the body. Consciousness is the inner loop, evolution is the outer loop of life. The goal of consciousness is tied to its survival.


Aren’t these things supposed to do automatic feature detection? Wouldn’t these models figure out consciousness as some hidden layer over time?


You're talking about it right now in the physical world, though.


How is that different from a turing test?


I can accept (and to be honest even like) the idea, that consciousness somehow emerges from the complex structures in an animal brain, that there is no soul, no other planes of reality, no special quantum phenomena needed, etc.

Maybe we could create a synthetic artificial conscious mind. At worst we could simulate a full human brain at whatever level is necessary. I can accept that.

What's crazy to me is the following: It's not the computer that's conscious. Instead, the computation itself is conscious. And the computation is obviously matter-independent. As a thought experiment it would be possible to compute it on paper and those pen and paper calculations would be conscious. Or pebbles in a desert XKCD style.

https://xkcd.com/505/

Like… what?


I think that probably is the logical conclusion. There are a fair number of sci-fi books along those lines, e.g. Permutation City.


The way I think about it is that no neuron is conscious; it is the network that is conscious. That is why the "mystery" of Searle's Room doesn't seem mysterious at all. Of course the human following the directions in the room, doing the thinking in Chinese, doesn't understand the language any better than the ATP in my brain (which is what the human is in that thought experiment: an energy source) knows English.


One thought experiment I always find myself having is that if this is the case, does it make any difference if the network is physically larger? Not necessarily more complex, just the distances signals have to travel is bigger, say planet sized instead of brain sized. Would this system be conscious but just with a slower tick rate? Faster time perception?
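To put rough numbers on this intuition: if signals still travelled at something like neural conduction speed, latency would scale with the diameter of the network. A back-of-the-envelope sketch in Python (all the sizes and the signal speed below are illustrative assumptions, not measured values):

```python
# Back-of-the-envelope: how much slower would a planet-sized network
# "tick" than a brain-sized one, if signal speed stays the same?
# All numbers are rough illustrative assumptions.

brain_diameter_m = 0.15       # ~15 cm across a human brain
planet_diameter_m = 1.27e7    # roughly Earth's diameter in metres
signal_speed_m_s = 100.0      # fast myelinated axon, ~100 m/s

brain_latency_s = brain_diameter_m / signal_speed_m_s
planet_latency_s = planet_diameter_m / signal_speed_m_s
slowdown = planet_latency_s / brain_latency_s

print(f"one brain-crossing:  {brain_latency_s * 1000:.1f} ms")
print(f"one planet-crossing: {planet_latency_s / 3600:.1f} hours")
print(f"slowdown factor:     {slowdown:.2e}")
```

On these assumptions a single signal crossing takes hours instead of milliseconds, a slowdown of about eight orders of magnitude, which suggests the planet-sized system could in principle have the same structure but an enormously slower subjective tick rate.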


IIRC It’s Steve Grand’s contention in Creation: Life And How To Make It that we will only get artificial consciousness with artificial biological drives.


Couldn't you say the same about us? It's not the brain (hardware) that is conscious, but the mind (software) running on it.


How could you be deciding what's for dinner, if the decision takes so long that all the food has rotted, your body has died, and the continent is now in an ice age? How could you be learning something about your observations of the night sky, if the stars had burned out before you knew they were there at all?

Our 'conscious awareness' is the awareness of the environment around us in a timeframe where it stays approximately the same; when something changes too quickly, an explosion, a car crash, we have to wait until the environment steadies before we can think about it. If an insect zips by too quick to see, we never become conscious of it. We can be conscious of sound but not radio waves. Is it possible to have a consciousness where everything zips by too quickly for it to notice, where it has no senses to learn about what's around it, where it's not conscious of anything?

At that point, the XKCD person moving the stones on the beach is doing a bit of a Searle's Chinese Room; "look, these stones I'm setting into positions I chose to represent some knowledge I chose, which I'm moving in patterns I chose, are echoing my choices back to me in ways I chose to interpret!".


In many ways 'consciousness' is a human projection onto machines. The whole debate might be, in some sense, irrelevant. Here's 2 scenarios:

- Your robot companion gets so good at mimicking a human, knows all your preferences and is able to have natural conversations, that over time you forget you are interacting with a robot and start to develop feelings for them.

- A robot swarm interacts with the environment to further its goals and continues to grow long after humans have disappeared from the scene.

In both cases is there consciousness? Does it matter?


This echoes my feelings on this whole debate. The question of consciousness matters much less than the question of how we decide to treat things that we view as being very much like us.


Can't open the PDF, is arXiv bugged?


Consciousness is an evolved information system requirement in all life forms that are complex enough to be capable of computationally modelling their environment, and importantly, their own place in that environment. You cannot model the world and your place in it without consciousness.

Also, consciousness is not binary (present or not), it is a gradient. I am more conscious than I was this morning when I groggily opened my eyes and hadn't yet fully booted up my model of the world. I am more conscious than my dog, which is more conscious than my goldfish, which is more conscious than a worm, etc. A dragonfly has consciousness, but only enough to model, "I am here. I want to eat that."

The scary bit about consciousness being a gradient is that we kind of consider ourselves the pinnacle of life forms mostly because of the complexity of our conscious experience. If consciousness is simply an emergent property of sufficiently complex information modelling, then assuming continued increases in computational capability, we're probably on the brink of creating consciousness that is "more conscious" than ourselves. And by our own definition of import, this consciousness will exceed our own place in the universe.

Will human life be the most important consideration in the universe if there is an artificial intelligence (actually, there will be nothing artificial about it) that is capable of modelling and empathizing with billions of life forms on an individual level?


There are two types of gradient though, conceptually. If consciousness is some state of matter that is unknown still, and each neuron, for example, contains "one bit" of consciousness, then the gradient is that as you add more neurons, you add more complexity to the consciousness, but you do not change the fundamental experience of consciousness. You add more content but not more experience in itself.

If on the other hand consciousness is this emergent phenomenon that depends on neurons and their connections, then the gradient (and thus the experience) would be far more diverse and there would be a lot of different ways consciousness could "feel".

The problem I have is that, for example, as far as my brain can remember, stimuli have looked exactly the same all throughout my life. If I saw a tree when I was 10, and I saw the same tree now, the conscious "qualia" of this would look exactly the same. To me it's a mystery that the connections in the brain do not change the experience of qualia at all. Red looks like red no matter what the neuronal state of your brain is. I don't have an answer to this; it's just something I've been thinking about.


> You cannot model the world and your place in it without consciousness.

At any high school you can find a robot which models its place in the world without consciousness.


> If consciousness is simply an emergent property of sufficiently complex information modelling, then assuming continued increases in computational capability, we're probably on the brink of creating consciousness that is "more conscious" than ourselves. And by our own definition of import, this consciousness will exceed our own place in the universe.

You went from 'consciousness exists on a gradient' (makes sense) to 'consciousness exists due to information modelling' which is a non sequitur.

Consciousness could be due to information modelling.

It could also be due to our brain's reliance on dopamine.

Or maybe it's due to a heretofore unknown enzyme that taps into a quantum field.

Or any other explanation.

There is no way to prove that consciousness relies on information modelling. That's a major assumption.


> "Consciousness is"

So certain, so much knowledge. Where do you get this certainty from and can you share with us so we can know too?


Not trying to be facetious: is a Roomba conscious?


Artificial consciousness is a better term for what's often called AI in science fiction.


Oracle, human agent maybe?


Please define consciousness.


> To date computers (and AI in general) operate prevalently in an input-output mode. This is strikingly not the case for the human brain which works in a projective – or predictive - mode constantly testing hypotheses (or pre-representations) on the world including itself (J.-P. Changeux, 1986; Friston et al., 2016; Pezzulo, Parr, Cisek, Clark, & Friston, 2024). This projective/predictive mode relies on the fact that the brain possesses an intrinsic - spontaneous – activity (Dehaene & Changeux, 2005). The earliest forms of animals do exhibit a nervous system where spontaneously active neurons can be recorded (e.g., jelly fish, hydra). Such spontaneous oscillators [...]

> Last in agreement with our views, the active inference theory formalizes how autonomous learning agents (whether artificial or natural) shall be endowed with a spontaneous active motivation to explore and learn (Friston et al., 2016), which other studies confirmed to be sufficient for the emergence of complex behavior without the need for immediate rewards (https://arxiv.org/abs/2205.10316).

...

What I take from this is that there needs to be something of a [negative] feedback loop in the AI for it to get to consciousness, and if we think about how that works in nature, that means we need several negative feedback loops, including the AI equivalent of various hormones and signaling agents. Think dopamine.

Now, AI already has feedback. But I'm talking at a different layer. The AI's interaction with the world has to help drive and modulate what the AI seeks to do through that interaction. The AI needs motivation in the form of pain, pleasure, and instincts.


I believe in the laws of physics, but I recognize they are nowhere materially in the universe. Ergo, non-physical realms exist. Consciousness might be there. Certain matter arrangements are able to interface with it.


After reading the abstract, my takeaway is that they intend to redefine consciousness to match whatever they are able to achieve, however limited.

The upshot will be a lot of possible gaslighting. I remember when kids thought that Teddy Ruxpin was conscious. What's society's tolerance for how alt a consciousness can be?


if the current trends can be compared with history of modern sciences, then we are currently at the alchemy era of making machines do smart things.

while history has shown that we can still make things happen without fully understanding it, our current understanding blinds us from seeing the bigger picture.

lots of fool's gold being tossed around. there will be AI aluminium flaunted by tech bros.


It's really just a state machine.


No


Y


Who cares? And how would we tell?


I care, because getting it wrong in either direction is bad.

Brain uploads will bring significant benefits regardless of if they are conscious.

Thinking "this brain upload is conscious" when it isn't, means we'll get an empty future where the lights are on and nobody's home.

Thinking "this brain upload isn't conscious" when it is… I've not seen most of Black Mirror, but it is the plot summary of many episodes given on Wikipedia. Also of the Westworld TV show, which I have seen. Some of the characters in the We Are Bob series.

How would we tell, is, unfortunately, a complete unknown at this point.


I think multiple forms of consciousness exist now. ADHD, autism?

I think it’s a good thing.


Am I the only one who thinks consciousness is irrelevant and uninteresting?


I am the only one


Yes.


We don't really need artificial consciousness. Actually, we should avoid it. We already have Artificial General Intelligence (contrary to common perception). Artificial consciousness will only complicate things. A lot. Better to keep enhancing the AGI we have and use it as a tool than to have to deal with the emotional and ethical aspects of the AI in case it gains consciousness.



