1. I don't think human reasoning is consistent in the technical sense, which makes the incompleteness theorem inapplicable regardless of what you think about us and Turing machines.
2. The human brain is full of causal cycles at all scales. Even if you think human reasoning is axiomatisable, it's not at all obvious to me that the set of axioms would be finite or even computable. Again this rules out any application of Gödel's theorem.
3. Penrose's argument revolves around the fact that the sentence encoding "true but not provable" in Gödel's argument is actually provably true in the outer logical system being used to prove Gödel's theorem, just not the inner logical system being studied. But as all logicians know, truth is a slippery concept and is itself internally indefinable (Tarski's theorem), so there's no guarantee that this notion of "truth" used in the outer system is the same as the "real" truth predicate of the inner system (at best it's something like an arbitrary choice, dependent on your encoding). Penrose is referring to "truth" at multiple logical levels and conflating them.
In other words: you can't selectively choose to apply Gödel's theorem to the situation but not any of the other results of mathematical logic.
Perhaps I misunderstand, but your criticism seems to me rather to agree with Penrose's argument. Points (1) and (2) argue that human reasoning is not computational, in the sense relevant for Gödel's theorems. That is exactly Penrose's point (and was Gödel's point too). He's arguing that we are not going to get a fundamentally super-computational system through a stack of computations, no matter how big your stack gets. You seem to be confirming this.
I don't understand how you mean (3) to apply as a criticism at all. He is not making a claim about truth at some level, he's just reminding us what computation is and what its limits are.
afaiu, there are two ways to counter Penrose's claim:
a. Prove that consciousness is actually just computational.
b. Prove that merely stacking computations can somehow produce a super-computational system.
I understood that part of Penrose's argument to run as follows (which I got from his book The Emperor's New Mind; I admit I actually skimmed the interview).
1. Assume for contradiction that human reasoning is formally describable, as an algorithm or as a logical structure.
2. As a result, the incompleteness theorem produces a sentence that is true but not provable in that formal system, and which human reasoning therefore cannot recognise as true (by virtue of being described by this formal system).
3. But the proof of the incompleteness theorem shows that this sentence is true, and that proof was produced by a human: a contradiction.
I don't necessarily disagree with the conclusion (I'm kinda agnostic at this point), I just think that this particular argument doesn't hold water.
Yeah if anything in this universe is undecidable, it's gotta be the conscious thought and behavior of a sufficiently complex brain.
Even if a full system analysis reveals patterns that can be used for short-term predictions, you're still only looking at inputs and outputs, and run into the system identification problem. Two brains could produce the same measurable output while still experiencing wildly different internal phenomena and qualia. For example, how would I know that my blue is your blue, even if we both answer blue and our visual cortices exhibit similar firing patterns?
If you and I had the same exact input and outputs then why would we have different experiences? Seems very unlikely to me.
A better way of thinking about qualia is like embeddings in a neural network. Every time you run the training from a random initialization you will get a different resulting embedding, but given the same training data all the embeddings will essentially be equivalent under rotation.
I.e., your internal representation of blue might be very different from mine in an absolute sense but our relation between the representation of different colors will be roughly the same.
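To make the "equivalent under rotation" claim concrete, here's a minimal numpy sketch (the five-point "color" embeddings and the Procrustes alignment are purely illustrative, not taken from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the embeddings two networks learned for the same five
# colors: identical pairwise geometry, different absolute coordinate frames.
A = rng.normal(size=(5, 2))                      # "my" color embeddings
theta = rng.uniform(0, 2 * np.pi)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
B = A @ R_true                                   # "your" embeddings: a rotated copy

# Orthogonal Procrustes: find the rotation R minimising ||A @ R - B||.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

print(np.allclose(A @ R, B))                     # True: same relations, different frame
```

The rotation leaves all distances and angles between the color vectors untouched, which is exactly the sense in which the two internal representations would be relationally equivalent.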
> If you and I had the same exact input and outputs then why would we have different experiences? Seems very unlikely to me.
The point is that you may not be able to prove that for a sufficiently connected system.
> A better way of thinking about qualia is like embeddings in a neural network.
I don't think this is a good analogy. Even if two brains might have the same input and output pairs, you don't necessarily know that they had the same experiences. You also don't know that initial epigenetic and prenatal conditions haven't diverged between the two brains. It would be extremely hard to control for this in a lab, and certainly you won't encounter such similarities in the wild.
> your internal representation of blue might be very different from mine in an absolute sense but our relation between the representation of different colors will be roughly the same.
I think you misunderstand qualia, as this is the exact crux of the argument. Just because we can agree to relational congruencies doesn't mean we have the same individual internal experiences. And we can't just handwave this away as "not important" or "roughly the same". In any case, your argument is self-defeating, as any deviation in internal experience validates my original claim.
> If you and I had the same exact input and outputs then why would we have different experiences? Seems very unlikely to me.
On the contrary, I would say it's quite unlikely that two brains with the same input and outputs will ever have the same experiences in deriving the output from the input. But neither of us (nor anyone in the world) knows how human brains work, so it is probably not useful debating about this.
Your (3) is beautifully said... and to prove the point, we are perfectly able to make computational systems that can manipulate symbols in the same way that Gödel did to verify the incompleteness theorem. Humans and computers are both able to do work within either logical system, and incapable of doing work that crosses between them.
Everything that makes "truth" slippery makes "intelligence" and "consciousness" even more slippery and subjective. This is why AGI has such a negative impact on AI discourse -- the cause only advances when we focus on improving at measurable tasks.
Indeed, proof-checking tools like Lean can reason about their own logical systems and prove their own incompleteness, but I doubt Penrose would conclude that they are not formal systems as a result.
I like to think people can still make progress on questions of intelligence and consciousness though. Michael Levin's work comes to mind, for instance. Science is just at very early stages of understanding =)
> it's not at all obvious to me that the set of axioms would be finite or even computable
The reasoning is represented by a finite number of elementary physical particles and so must itself be finite. Because it is finite, it is computable.
Said another way, you would need an infinitely large brain (or an infinitely deep one) to create infinite reasoning.
True but not relevant. In this case "it" is the number of states of a finite volume that we believe to be fundamentally quantised.
Borealid isn't saying that any finite output is computable, but that the outputs of this specific thing are computable because, as far as we know, it has a finite number of states.
This implies that brains can't compute the general nth BB function which is also true as far as we know.
I don't think this argument holds water for a number of reasons:
1. It is an unknown whether a finite volume of space can fundamentally be described by a finite number of states. You can extrapolate to this situation from your favourite theory, but this is not evidence that reality actually works like that. Physics has a long way to go to understand space-time completely.
2. Even assuming that is true, brains are not isolated systems. They are entangled with their environment. Why are you so sure that human cognition can be neatly separated into a finite box like this? The reality is almost certainly more complicated.
3. Lastly, you cannot measure a system like the brain to fundamental levels of detail without destroying it. You literally cannot clone a brain state if you take modern physics seriously, so this whole thing is a non-starter anyway.
You are right that we don't have a theory of everything. However, we can go pretty far with what we have and some clever reasoning. Eg when you have two different gases, like hydrogen and helium, in separate containers and mix them, you can build a relatively simple engine to extract work from that mixing.
That engine also works when your gases are almost but not quite the same, eg when you have deuterium and hydrogen.
But it doesn't work when you have the same gas, like hydrogen, on both sides.
That gives a pretty strong hint that hydrogen atoms 'have no hair', ie they are all the same.
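For reference, the standard numbers behind that engine (this is the Gibbs paradox; notation mine): mixing two different ideal gases of $N$ particles each at temperature $T$ gives

$$\Delta S_{\text{mix}} = 2 N k_B \ln 2, \qquad W_{\max} = T\,\Delta S_{\text{mix}} = 2 N k_B T \ln 2,$$

while for two samples of the same gas $\Delta S_{\text{mix}} = 0$ and no work can be extracted.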
Fair point on 1, although I personally view that as extrapolation from an incomplete theory. But perhaps I have not read enough of the evidence for that bound.
I think you underestimate the complexity of systems with more than a handful of interacting particles. Yes, you can extrapolate with what we have but I think the answers you get are likely to be totally wrong outside of a very controlled regime. People simulating large scale interaction have to make a lot of simplifying assumptions, which need to be tuned to the problem at hand to be effective. Non-equilibrium statistical mechanics is poorly understood.
They make a lot of simplifying assumptions, because they don't just care about computability, but about finishing runs of their simulations before the heat death of the universe.
Our discussion was about the computability only, wasn't it?
The Busy Beaver deals with two kinds of programs: those that can use infinite amounts of time and space, and those that only use finite amounts of time (and thus also only finite amounts of space).
As far as I can tell, there's no place in the Busy Beaver ever, where space is finite but time is infinite.
And in any case, if your space is finite, you don't need infinite time: you can 'trivially' detect that you are revisiting a configuration you have reached before and abort. The Busy Beaver problem is harder than that.
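A minimal sketch of that detection, assuming a circular tape of fixed length (the machine and the `runs_forever` helper below are made-up illustrations, not real Busy Beaver candidates):

```python
def runs_forever(delta, tape_len):
    """delta maps (state, symbol) -> (write, move, next_state); 'H' halts."""
    tape, state, head = [0] * tape_len, 'A', 0
    seen = set()
    while state != 'H':
        config = (state, head, tuple(tape))
        if config in seen:
            return True                    # repeated configuration: it loops forever
        seen.add(config)
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head = (head + move) % tape_len    # finite tape, so wrap around
    return False                           # reached the halt state

# A one-state machine that flips the current cell and moves right never halts:
delta = {('A', 0): (1, 1, 'A'), ('A', 1): (0, 1, 'A')}
print(runs_forever(delta, tape_len=4))     # True
```

The `seen` set is finite because the configuration space is finite, so this always terminates with a definite answer.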
The Busy Beaver Game is not a single Busy Beaver, but the game where the objective is to locate the longest-running program of a given program size. What makes it non-computable is largely the fact that how long the program can run for is not known in advance.
If there is a cap on both the program size and the execution time of any individual Busy Beaver, then it becomes computable by the trivial expedient of generating every single possible Turing machine of the target size, executing each (stopping at the time limit), then returning the one that ran for the greatest number of steps and terminated.
In other words, the Busy Beaver Game is noncomputable because of the Halting Problem, and the Halting Problem is noncomputable because it lacks an upper bound in time.
Incidentally, bounding time also bounds output space because at each time unit at most one unit of output may be written.
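Here's a sketch of that trivial expedient (illustrative only; the `run` and `bounded_bb` helpers are hypothetical names, and real Busy Beaver searches are vastly more careful about pruning and symmetry):

```python
from itertools import product

def run(delta, max_steps):
    """Run a binary Turing machine on a blank two-way tape; -1 is the halt state."""
    tape, state, head = {}, 0, 0
    for step in range(max_steps):
        if state == -1:
            return step                    # number of steps taken before halting
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return None                            # hit the cap: treated as non-halting

def bounded_bb(n_states, max_steps):
    """Longest halting run among all n-state machines, capped at max_steps."""
    keys = [(q, s) for q in range(n_states) for s in (0, 1)]
    # Each transition: write 0/1, move left/right, go to a state or halt (-1).
    choices = list(product((0, 1), (-1, 1), list(range(n_states)) + [-1]))
    best = 0
    for table in product(choices, repeat=len(keys)):
        steps = run(dict(zip(keys, table)), max_steps)
        if steps is not None:
            best = max(best, steps)
    return best

print(bounded_bb(1, max_steps=100))        # a tiny instance; finishes instantly
```

Both loops are finite, so this always terminates: exactly the point that capping program size and execution time makes the game computable.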
> In other words, the Busy Beaver Game is noncomputable because of the Halting Problem, and the Halting Problem is noncomputable because it lacks an upper bound in time.
Not completely. The halting problem is computable for some more limited forms of computation, even if they have no upper bound in time.
In an alternative formulation: for some Turing machines we can prove in finite time that they run forever. Thus your 'lack of an upper bound in time' is not enough.
I believe there exists some subset of Turing machines that can be proven to run forever.
But providing an oracle for the halting problem - of which an upper bound is one form - makes the Busy Beaver Game computable.
To get back to the original topic... a finite brain in finite time can only produce finite output. Both the brain and its output are, like all finite sets, computable.
Your last point is correct, though: if you can detect a "cycle" (like the one the Turing machine I linked to goes through) then you can conclude that the machine won't halt.
Yes, it's trivial to write a Turing machine that only uses finite space but infinite time.
But as you agree, they are of no concern for the computability of Busy Beaver numbers. (However, they are of major concern for people who want to find Busy Beaver numbers in practice.)
The Busy Beaver numbers may be finite, but the machine (specifically its tape) that produces them is not. If the Busy Beaver is running on a Turing machine with a finite tape length, the number becomes computable.
Turning it around, the answer to "can a machine of infinite size do things a finite computer can't" is "yes". That answer ends up being the reason many things aren't computable, including the halting problem.
The halting problem is a trick in disguise. The trick is: no one said the program you are checking halts had to have finite code, or finite storage. Once you see the trick the halting problem loses a lot of its mystique.
I'm not sure what you mean here - the Turing machine that represents a particular BB number halts by definition, which means that it can only visit a finite segment of the tape. Nevertheless BB numbers are incomputable in general.
On your second point - allowing infinitely many steps of computation lets you solve the halting problem for regular Turing machines, but you still get an infinitary version of the halting problem that's incomputable (same proof more or less). So I don't think that's really the issue at stake.
The difficulty of the problem is not determined by the answer. In the Busy Beaver's case the answer could be the number itself or the number of states in its code. Yes, they are both finite, but that doesn't matter.
The difficulty of a problem (which is the time required to find the answer) is determined by how many possibilities you have to explore. Since the Busy Beaver is defined in terms of the number of 1's on a tape when it halts, a "possibility" is an arrangement of 1's a Turing tape can support. As a Turing tape is infinitely long, it supports an infinite number of arrangements of 1's.
This all follows from the definition of "not computable". We say a problem isn't computable if we can't find the answer with a finite sized program, using finite space. It's not an unreasonable definition. How else could you define it?
That definition does leave out the number of steps needed to find the solution. The OP's assertion above is that if both the program and the space are finite, then the number of steps must also be finite or it must loop forever. I'll leave looking up why as an exercise for the reader. (The proof is pretty simple. It's only a few lines long.) That means in a finite system there is no "halting problem" because looping is easy enough to detect when it happens, and you will either eventually see the program loop or halt because there are no other possible outcomes.
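For completeness, the pigeonhole bound behind that exercise (notation mine): a machine with $|Q|$ states, $n$ tape cells, and alphabet $\Gamma$ has at most

$$|Q| \cdot n \cdot |\Gamma|^{n}$$

distinct configurations. Any run longer than that must repeat a configuration, and a deterministic machine that repeats a configuration is stuck in a loop forever.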
"Non-computable" therefore means "oops we hit an infinity". Twisting that around, if we decide something isn't computable an infinity must have snuck into the problem somehow. All the proofs you see demonstrating something is non-computable on a Turing machine happen because the infinite thing sneaking in is it's tape. Restrict the Turing tape to being finite, and every problem that can be solved on it is computable.
If you want to see how this works in a practical sense, consider BusyBever(6). It's possible we will never solve it because to solve it you need to solve the Collatz Conjecture. The conjecture is simple: if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. It's easy to disprove: all you have to do is find a counter example. None has been found of course, but that doesn't mean much because there are infinite numbers to check and infinite means non-computable. But what if we remove the infinity? Lets just insist x < N, where N is an integer. Then the Collatz Conjecture becomes solvable, and if BusyBever(6) doesn't contain another such puzzle it becomes solvable too.
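A quick sketch of that finite check (the `reaches_one` helper is an illustrative name, and the step cap is an assumption standing in for the unproven termination of individual trajectories):

```python
def reaches_one(x, step_cap=10**6):
    """Follow the Collatz rule from x; the cap is a safety net, not mathematics."""
    for _ in range(step_cap):
        if x == 1:
            return True
        x = x // 2 if x % 2 == 0 else 3 * x + 1
    return False                           # cap hit: undecided, report failure

N = 10_000
print(all(reaches_one(x) for x in range(1, N)))   # True: a finite, terminating check
```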
(appreciate the background, but fyi I'm very familiar with the theory of computation)
I may have misunderstood what you meant when I replied.
I agree with basically everything you have written here in spirit, but I don't think "oops an infinity snuck in" is really a useful way to think about computability. There are lots of infinities that can be tamed by Turing machines, by a program that computes membership, for instance. The even numbers are a trivial example. The infinities of computation are asymptotic, not absolute - a Turing machine that solves a decision problem may use unbounded space as a function of input size, but this is not the same thing as literally using an infinite amount of space.
Restricting to finite tapes is pointless from a mathematical point of view, precisely because everything becomes computable in finite time/space.
> There are lots of infinities that can be tamed by Turing machines, by a program that computes membership, for instance.
To someone who is familiar with mid-level high school maths (calculus, infinite series) - which I assume covers most here - this is obvious, surely. We don't just tame infinities in maths and physics, we put them to work. You not only have to be aware of them, you have to use them to get some important results. I doubt you would find someone willing to argue against this in mathematics.
The same statement is true in computing. You have to be aware of unwanted infinities. I'd lay long odds there are more programmers out there who have created unwanted infinite loops than there are maths students who have unwittingly divided by 0 and got a nonsensical result. Yet at the same time, infinities lie at the heart of some important results - like the halting problem. Everyone should be aware they are both a nuisance you don't want sneaking in unnoticed, and a very valuable tool.
> I agree with basically everything you have written here in spirit, but I don't think "oops an infinity snuck in" is really a useful way to think about computability.
I disagree. The halting problem is one specific result. It is merely the outcome of a more fundamental idea. To wit, there are infinite states (think: infinite strings of bits) no finite program can describe (or equivalently reproduce, or recognise). It's one of those rare things that is hard to get your head around when you first hear it, then becomes obvious after a bit of thought. Somewhat less obvious, to me anyway, is that the result applies even if you give that finite program infinite time and infinite space.
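For reference, the counting argument behind that claim: there are only countably many finite programs, but uncountably many infinite bit strings,

$$|\{\text{finite programs}\}| = \aleph_0 < 2^{\aleph_0} = |\{0,1\}^{\mathbb{N}}|,$$

so almost every infinite string has no finite description.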
Like most fundamental ideas, this one leads to other results with little effort. For example, if you view the finite program as a compressed version of the infinite space, it means some infinite spaces cannot be compressed. This result is doubly interesting because it extends to finite systems, linking the seemingly disparate ideas of randomness, compression and computability. It also means that if the universe has some infinity hiding inside it, it's possible our finite minds can never understand it. It means you will never be able to write a finite program that proves some infinitely long programs halt. It almost certainly means there is an N above which BB(N) isn't computable, and from what we know now that N may be 6. It is why Gödel's theorem holds. Most of those things aren't nearly as easily deduced from the halting problem, which is why it isn't as useful for understanding the effects of infinities on computability.
Above I think you are saying "but, but, introducing the idea that some infinities can be described might confuse some programmers. It may lead them to think a finite program can't describe an infinite series of 1's, for example". I'm not sure how many programmers you know, because in the world I live in not a single one would be confused by that. All are perfectly capable of understanding that some infinities can be described (not least because they've probably written a few by mistake) and the new idea that some infinities can't be written down.
I don't know why you think I'm worried about confusing programmers? Did I say that?
My claim is that all of the results you are attributing to "infinity" are true for other reasons, like diagonalisation, and cannot be proven directly from the concept of infinity like you are claiming. It's a heuristic that is not helpful for actually doing the mathematics.
Feel free to prove me wrong by giving me a (mathematical, not hand-waving) proof of the halting theorem that only makes use of "infinity" as you are describing it. You won't be able to do it, because it's not the crux of the halting problem. It holds for infinite programs too, because diagonalisation arguments don't care how big your set is.
Anyway, I don't really have the energy for this. I appreciate the time and discussion, for which I thank you. But as someone who studied maths at a PhD level and now programs for a living, I'm not getting much out of these ideas. Perhaps we can just agree to disagree for now.
> This all follows from the definition of "not computable". We say a problem isn't computable if we can't find the answer with a finite sized program, using finite space. It's not an unreasonable definition. How else could you define it?
Btw, your reasoning is not really valid. There are plenty of Turing machines that print out lots of 1s (either finite or infinite), where we can accurately predict in finite time how many they will print out. (To fill out the details: crucially the prediction runs much faster than the original machine, and you can assume that we give that answer in eg binary numbers, so we can write it down more compactly than in unary).
> That definition does leave out the number of steps needed to find the solution. The OP's assertion above is that if both the program and the space are finite, then the number of steps must also be finite or it must loop forever. I'll leave looking up why as an exercise for the reader. (The proof is pretty simple. It's only a few lines long.) That means in a finite system there is no "halting problem" because looping is easy enough to detect when it happens, and you will either eventually see the program loop or halt because there are no other possible outcomes.
Please be very careful about distinguishing a system with a fixed upper limit from one that can be arbitrarily big but finite.
Also be careful, the opposite of your assertion doesn't hold: even an arbitrarily extendable tape doesn't automatically mean that the halting problem is unsolvable for that machine model.
As a last caveat: just because the definition of your problem uses infinity (or the limit of an infinite process, etc) doesn't mean that infinity is inherent to what you are defining. Eg 1 + 1/2 + 1/4 + 1/8 + ... is the sum of an infinite series, but you might be aware that we can calculate the result with a finite computation.
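For reference, the closed form that makes that a finite computation:

$$1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = \sum_{k=0}^{\infty} 2^{-k} = \frac{1}{1 - \tfrac{1}{2}} = 2.$$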
Similarly, the classic definition of Busy Beaver numbers mentions infinity, but that doesn't automatically mean all means of arriving at these numbers have to involve infinite processes.
> [...] infinite means non-computable.
No. Not at all. You argued that non-computable implies an infinity somewhere, and I can grant that. But the reverse is not true. There's lots of variants of that Collatz conjecture with slightly different rules that also cover all numbers, and that we can definitely prove or disprove.
Or more trivially: take any old mathematical statement over all natural numbers. Many of them can be proven true or false, despite the infinity.
> There are plenty of Turing machines that print out lots of 1s (either finite or infinite), where we can accurately predict in finite time how many they will print out.
The reasoning is: not-computable implies an infinity. As you know, "implies" is not symmetric, so no, I wasn't intending to say the reverse.
> > [...] infinite means non-computable.
You didn't quote the full sentence. It was "because there are infinite numbers to check and infinite means non-computable". Being forced to check an infinite number of possibilities is one definition of non-computable, as you acknowledged.
> The reasoning is: not-computable implies an infinity. As you know, "implies" is not symmetric, so no, I wasn't intending to say the reverse.
OK, but you implied you were giving a definition? Definitions are usually two sided, where every 'if' is silently implied to be an 'iff' for convenience.
> You didn't quote the full sentence. It was "because there are infinite numbers to check and infinite means non-computable". Being forced to check an infinite number of possibilities is one definition of non-computable, as you acknowledged.
Not really. You'd need to prove that you actually need to check the numbers. We need to exclude the possibility that we just lacked the right idea.
Eg it is fairly easy to prove that there are infinitely many prime numbers without having to check all numbers. It was a lot harder to prove Fermat's Last Theorem without checking all the numbers. For the Collatz conjecture, we don't know.
What I did acknowledge was that _if_ you can prove an upper bound, then checking all numbers is computable, yes.
But if we haven't found an upper bound (yet), we have no clue whether the problem is computable or not.
I mean, any given Turing machine has finite code. But with even a very small number of states you start getting into crazy territory.
We’ve constructed Turing machines with fewer than 800 states that halt if and only if ZFC is inconsistent. Which means that even given infinite time and space, we still couldn’t find BB(800) without fundamentally changing our system of mathematics.
Is it only neurons? Is it not also the forces acting on the brain matter, chemical and physical, and the brain's initial conditions? I suspect any brain state is created from practically infinite complexity, unless there was some way to simulate the local light cone and age a model appropriately.
Even according to Platonism, souls are forms and forms are described by science. The only difference is that forms are substance and can exist detached from matter, but this doesn't affect computability. Moreover, mathematics is the epitome of eidos.
The commenter's point was to disagree with the previous comment that "souls aren't real". Lack of evidence either way means we don't know. Occam's razor, while a good heuristic, is a heuristic, not a theorem.
I think that doesn't work, because we don't know how to represent and predict the state of a cloud of elementary particles to that level of detail. You could argue that the mathematics proves that this is possible in principle, but I counter that you have no idea whether the theory extrapolates to such situations in real life because it is way out of humanity's compute budget to test. Like the rest of physics, I expect new regimes would come with new phenomena that we don't understand.
> 1. I don't think human reasoning is consistent in the technical sense, which makes the incompleteness theorem inapplicable regardless of what you think about us and Turing machines.
Human reasoning is certainly limited. I mean, imagine the kinds of formulas one would get applying Gödel's theorem (GT) to the brain. They would be so enormous they'd be entirely impenetrable to human reasoning. We couldn't even read them in a single lifetime.
As for proving GT itself, this proof has been formalized. There's no reason a computer couldn't prove GT by itself.
Just a word on 2: I think that the axioms have to be finite, right? (Given that they exist at all.) Nothing in the physical universe can require an infinite description.
Well, technically for Gödel's theorem to apply to a formal system it has to satisfy a few properties. One of those is that it has a computable list of axioms. Infinite sets can be computable too - the only requirement is that determining whether an axiom is contained in the set can be performed by an algorithm.
For instance the set of even numbers is infinite but computable, just check whether the number is divisible by 2.
The algorithm itself is finite, even if the set it determines is not.
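In code, the membership test being described is just a finite algorithm deciding an infinite set (a trivial sketch; `is_axiom` is purely an illustrative name):

```python
def is_axiom(n: int) -> bool:
    # Finite decision procedure for an infinite (computable) set: the evens.
    return n % 2 == 0

print(is_axiom(10), is_axiom(7))   # True False
```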
That's not quite the incompleteness theorem, which is about sentences that are unprovable but nevertheless have a truth value.
I think the liar's paradox is of a different kind. It's a sentence that looks well-formed but arguably has no truth value. If you were to formalise human reasoning as a logical system, such sentences would not be definable in it.
Either way, for Penrose's argument to work you actually need the proof of Gödel's theorem to hold, not just the result.