
> leveraging the Halting Problem gives you a lot of stuff for free.

Not exactly for free. You still have to prove the non-computability of the halting problem. But I get your meaning.

> Just the fact that we're assuming that a formal logic system and a programming language are isomorphic is actually a pretty huge deal and those developments span several generations of logicians.

Sure, but that is no different from any technological advance. Arabic numerals were a major breakthrough when they were first invented. Nowadays it is pretty safe to take them for granted in most cases.

> Godel numbering really originally made this entire thing work.

Sure, but that was only necessary because Godel didn't know about ASCII. Proving Godel's theorem using Godel numbers is kind of like writing a C compiler as a state table for a Turing Machine. You could do it, but no one in their right mind would do so except perhaps as an intellectual exercise.

The history of how technological innovations came about is very different from, and usually much more difficult than, the way one can re-invent those innovations with the benefit of hindsight.



Gödel numbering is chosen because numbers are very easy to reason about and mathematicians like that things are provable from simple principles like arithmetic, and because for Gödel's theorem to actually work on theories of arithmetic the encodings of formulas have to be numbers.

Of course, you can choose some other encoding scheme - one that is easier for humans to read, such as the binary encoding of a nicely formatted string - but that's not as directly amenable to a rigorous mathematical proof. In particular, you technically have to prove that that encoding scheme doesn't lead to ambiguities, i.e. every string has (at most) a unique parsing. That may seem (and is) obvious, but technically does require a proof, something you can avoid when you just use Gödel numbering via the fundamental theorem of arithmetic instead.

But Gödel numbering is not really the bread and butter of the proof. The more technically relevant part about Gödel's proofs that doesn't just readily follow from the halting problem is the fact that incompleteness of first-order theories requires only very weak conditions - Robinson arithmetic is already sufficient.

You have to prove that Robinson arithmetic is strong enough to express every primitive recursive function and that requires some mathematical tricks.


> because for Gödel's theorem to actually work on theories of arithmetic the encodings of formulas have to be numbers

ASCII codes are numbers.

> but that's not as directly amenable to a rigorous mathematical proof

Why not?

> you technically have to prove that that encoding scheme doesn't lead to ambiguities,

ASCII is unambiguous. In fact, the only difference between ASCII and Godel numbers is that ASCII encodes strings as powers of 256 whereas Godel numbers encode strings as powers of prime numbers.

> i.e. every string has (at most) a unique parsing

Huh? That makes no sense. Did you mean that every number has a unique mapping to a string? Because ASCII certainly meets that requirement.

> The more technically relevant part about Gödel's proofs that doesn't just readily follow from the halting problem is the fact that incompleteness of first-order theories has only very weak conditions - Robinson arithmetic is already sufficient.

Why does that matter? This seems to me like an interesting bit of trivia about Robinson arithmetic, but I don't see why this matters to the main point. Any system less powerful than PA is trivially incomplete.


Your point about ASCII codes ignores that we don't just want individual symbols to be numbers, we want entire formulas to be numbers too.

Of course, you can encode any sequence of ASCII codes as a number (just concatenate the binary digits), I didn't doubt that. But to formally prove that this leads to unique parsing requires unnecessary technical work that mathematicians prefer to avoid because the exact coding scheme doesn't matter.
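
Concretely, both schemes are easy to demo. Here's a quick sketch (Python; using sympy for the prime routines is just a convenient choice, nothing canonical) showing that each encoding round-trips a formula-as-string to a single natural number and back:

    from sympy import prime, factorint

    def godel_encode(s):
        # Classic Gödel numbering: 2^c1 * 3^c2 * 5^c3 * ...
        n = 1
        for i, ch in enumerate(s):
            n *= prime(i + 1) ** ord(ch)
        return n

    def godel_decode(n):
        # Unique factorisation guarantees exactly one parse.
        exps = sorted(factorint(n).items())  # primes in order = positions
        return ''.join(chr(e) for _, e in exps)

    def ascii_encode(s):
        # Fixed-width codes concatenated: a base-256 positional number.
        n = 0
        for ch in s:
            n = n * 256 + ord(ch)
        return n

    def ascii_decode(n):
        # The divide/modulo decoding; assumes no leading NUL characters.
        out = []
        while n:
            n, c = divmod(n, 256)
            out.append(chr(c))
        return ''.join(reversed(out))

    assert godel_decode(godel_encode("S0=S0")) == "S0=S0"
    assert ascii_decode(ascii_encode("S0=S0")) == "S0=S0"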

> Any system less powerful than PA is trivially incomplete.

That is false. See here for examples of complete theories, for which Gödel's incompleteness theorems don't apply because they are not strong enough: https://en.m.wikipedia.org/wiki/Complete_theory

(Moreover, there are theories that are neither stronger nor weaker than PA. One such theory can give rise to an interesting proof of consistency of PA: https://en.m.wikipedia.org/wiki/Gentzen%27s_consistency_proo...)


> Why does that matter? This seems to me like an interesting bit of trivia about Robinson arithmetic, but I don't see why this matters to the main point. Any system less powerful than PA is trivially incomplete.

Isn't this backwards? Systems that are simple enough (e.g. Presburger arithmetic) are complete, which is what we'd expect.


I guess I should have been more precise: trivially incomplete with respect to truths about numbers, which is the thing we actually care about.

It is trivial to construct a complete system that is completely (!) uninteresting. The empty system is complete with respect to its own theorems, of which there are none, but that is about as uninteresting as it gets.

The interesting thing is that as soon as you get a system that is sufficiently expressive to capture even an elementary subset of what we would consider "truths about numbers" that system is necessarily incomplete, and it cannot be made complete by making it more powerful. That is the interesting thing, and proving it does not depend (AFAICT) on the details of any particular encoding of truths about numbers as long as it is well defined.


> It is trivial to construct a complete system that is completely (!) uninteresting.

It's also easy enough to construct complete theories that are rather interesting.

> That is the interesting thing, and proving it does not depend (AFAICT) on the details of any particular encoding of truths about numbers as long as it is well defined.

No, the encoding scheme doesn't matter, that's correct.

What does matter is that the theory doesn't need to be as powerful as a full-blown Turing machine. That's why all the machinery in Gödel's proof is necessary.

You seem to be under the impression that Gödel numbering is the complicated part of that proof, when it is in fact the easy part (and, if you really want, can be entirely replaced by some other encoding scheme). The complicated part is proving that a relatively weak theory like Robinson arithmetic, which doesn't even have induction axioms, can nevertheless completely express all primitive recursive functions, and that predicates such as "x is a proof of y" can be expressed as primitive recursive functions.


How exactly did you think "Any system less powerful than PA is trivially incomplete." conveyed any of this? You seem to be claiming two almost opposite things about PA.


> How exactly did you think "Any system less powerful than PA is trivially incomplete." conveyed any of this?

I thought it was understood that the word "incomplete" in the context of a discussion of Godel's Incompleteness Theorem meant incomplete with respect to the truths of arithmetic, because that is what Godel's Incompleteness Theorem is about. It didn't seem like a huge leap at the time.

> You seem to be claiming two almost opposite things about PA.

And those two things would be...?


> incomplete with respect to the truths of arithmetic

This is not at all what it's about. First of all, "incompleteness" has a very specific meaning in logic, namely that the following conditional (what we call "completeness") does not hold for some formal system S:

    Γ ⊨ φ → Γ ⊢ φ 
Second of all, Godel's first incompleteness theorem doesn't concern itself with "truths of arithmetic" but with any sufficiently powerful logical formalism. This is why I contend that Godel numbering is actually where the trickery kind of comes in: it's what gives Godel the self-referentiality he needs in PA. A more general definition of the theorem, and one I generally favor over ones that bring up arithmetic, is[1]: "Any adequate axiomatizable theory is incomplete. In particular the sentence 'This sentence is not provable' is true but not provable in the theory."

[1] https://math.hawaii.edu/~dale/godel/godel.html


> This is not at all what it's about. First of all, "incompleteness" has a very specific meaning in logic, namely that the following conditional (what we call "completeness") does not hold for some formal system S:

> Γ ⊨ φ → Γ ⊢ φ

That is, unfortunately, a misunderstanding because the words "(in)completeness" have two very distinct meanings that are both used in logic.

One is the completeness of some system of logic, namely what you mentioned: "Γ ⊨ φ → Γ ⊢ φ". That's provably true for first-order logic (though not for second-order logic), giving rise to Gödel's completeness theorem. It remains true when Γ = PA (more specifically, first-order PA). In other words, any truth about numbers that follows from PA has a proof.

When we talk about the incompleteness theorems, however, we're talking about the completeness of theories. A theory Γ is said to be complete whenever for all sentences we have either Γ ⊨ φ or Γ ⊨ ¬φ. While there are examples of complete theories (I linked the Wiki article in a comment further up-thread, but it includes simple examples like the theory of dense linear orders with endpoints), it is also easy enough to construct a theory that is not complete. In particular (and this follows from the first incompleteness theorem), first-order PA is not complete. What this also implies is that not every true statement about the natural numbers follows from PA, or equivalently, that there are models of PA, i.e. structures in which all axioms of (again, first-order) PA are true, that are nevertheless distinct from the natural numbers because some sentences that are true in the one structure are false in the other.

More precisely, the first incompleteness theorem states something like "any effectively axiomatisable theory that is sufficiently strong is incomplete", where "effectively axiomatisable" means that we can determine in finite time whether something is an axiom of that theory, and "sufficiently strong" means something like "can encode every primitive recursive function" (the exact formulation depends a bit, because there are several different "flavours" of the first incompleteness theorem). In particular, it also states that any theory that includes Robinson arithmetic (e.g. PA) is incomplete.


> A theory Γ is said to be complete whenever for all sentences we have either Γ ⊨ φ or Γ ⊨ ¬φ.

You mean Γ ⊢ φ or Γ ⊢ ¬φ .

But note that what you said is not mutually exclusive with what /u/Tainnor said, i.e.

Γ ⊨ φ → Γ ⊢ φ

The proof of Godel's theorem is to exhibit a φ for which "Γ ⊢ φ or Γ ⊢ ¬φ" is false. From that you conclude that "Γ ⊨ φ → Γ ⊢ φ" is false on the assumption that "Γ ⊨ φ" is true. But that last part is an informal argument. We believe that "Γ ⊨ G" is true for the Godel sentence G but we (obviously) can't prove it.

Indeed, adding ¬G as an axiom to PA produces a consistent system where "Γ ⊢ ¬G" is (trivially) true. The semantics of that system are quite interesting: they say that there exists a Godel number of a proof of G, but no natural number is that number. It is similar to adding axioms like "there exists a number whose successor is zero" or "there exists a number whose square is the number whose successor is zero", which have obviously been very fruitful extensions of PA.


> You mean Γ ⊢ φ or Γ ⊢ ¬φ .

In principle no, though it's possible that it's defined like that in certain places, since in first-order logic, that amounts to the same thing (given the completeness theorem). But "completeness" is a notion from model theory that doesn't care about proof systems.

In a system where the completeness theorem is not true, such as second-order logic with standard semantics, we do have complete theories (such as second-order PA), where we always have Γ ⊨ φ or Γ ⊨ ¬φ, but that doesn't mean that every statement is either provable or disprovable.

> The proof of Godel's theorem is to exhibit a φ for which "Γ ⊢ φ or Γ ⊢ ¬φ" is false. From that you conclude that "Γ ⊨ φ → Γ ⊢ φ" is false on the assumption that "Γ ⊨ φ" is true. But that last part is an informal argument. We believe that "Γ ⊨ G" is true for the Godel sentence G but we (obviously) can't prove it.

I think that has it a bit backwards. It's not that we believe that PA ⊨ G - in fact, if we believed that, it would immediately imply PA ⊢ G, when that's exactly what we showed to be impossible. It's that we believe that G is true in the natural numbers, which is not the same thing as being true in PA, and that belief rests on our belief that PA is sound (which we can't prove, but seems reasonable).

(There are some different "flavours" of the incompleteness theorems, some of them rely on soundness of PA and others only on it being consistent.)

> But note that what you said is not mutually exclusive with what /u/Tainnor said, i.e.

That's because that's me. ;)


> That's because that's me. ;)

Oh, sorry, I thought I was responding to the parent comment by /u/dvt

Let's go back to basics. To begin with, let's make sure we're on the same page with regards to notation.

"Γ ⊢ φ" means that the string φ is a theorem under Γ. This is a syntactic property of Γ. The claim that some sequence of strings P is a valid proof of φ (and that therefore Γ ⊢ φ is indeed true) can be verified mechanically.

"Γ ⊨ φ" means that the string φ is true under some model of Γ. This is a semantic property of Γ and it cannot be checked mechanically.

We want "Γ ⊨ φ → Γ ⊢ φ" to be true, that is, we would like it to be the case that for every statement φ that is true under some model of Γ, we can mechanically prove that φ is a theorem of Γ, i.e. we can show that there exists a P which is a proof of φ. If we have such a P in hand we can always check it, but if we don't have a P the question of whether or not one exists is generally an open one.

But we actually want more than that. We also want Γ to be "interesting" in some sense. There is no sport in coming up with uninteresting systems for which "Γ ⊨ φ → Γ ⊢ φ" is true. The obvious example is the empty system and a corresponding empty model where every φ is both false and not a theorem.

So we impose some minimal structure on Γ, not because the rules of formal systems demand it, but because systems where, say, P and ¬P are both false are not very interesting. So we demand that e.g. there exist at least one true statement and at least one false statement in our model, not because there is any cosmic rule that requires this, but simply because the whole enterprise becomes a pointless exercise in navel-gazing if we don't.

That raises the natural question: what is the minimal amount of structure we can impose on Γ to make it "interesting" and worthy of study? And there are many answers to this, because there are many possible things that are potentially "interesting". Propositional logic. Set theory. Yada yada yada.

The surprise is that it turns out that with only a very small amount of structure we lose "Γ ⊨ φ → Γ ⊢ φ". As with the imposition of structure in general in order to make systems interesting and worthy of study, there are many different ways in which to impose enough structure to lose this desirable property. One way, the way which historically was the first to be discovered, is to impose enough structure that the system can be modeled by the natural numbers. But this is not the only way. Another way is to impose enough structure that the system can be modeled by a simple machine consisting of a tape and a lookup table. Yet another way is to impose enough structure that the system can be modeled by two urns that contain stones, and the ability to add and remove stones from the urns, and look in an urn to see if it is empty or not. There are many, many variations on the theme.

All of these things turn out to be in some sense "equivalent", and because of that the details of the structure that we impose on the system are uninteresting. It doesn't matter if we use numbers or tapes or stones, there is a certain point beyond which we lose "Γ ⊨ φ → Γ ⊢ φ". And the big surprise is that the threshold is very, very low. It takes very little structure to cross the threshold. There are systems for which "Γ ⊨ φ → Γ ⊢ φ" does hold, but those are necessarily of very limited utility.

The proof of the falseness of "Γ ⊨ φ → Γ ⊢ φ" for any given system depends on the details of the system, but the general procedure is always the same: you produce a mapping from proofs to elements of the model. You then use that mapping to produce an element of the model (called G) which is true under the model if and only if there does not exist an element of the model corresponding to a proof of G under your mapping.
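
That diagonal move has a familiar computational rendering (a sketch in plain Python, mine rather than from any particular text): handed any claimed halting decider, you can construct the one program it must answer incorrectly.

    def diagonal(halts):
        # Build the program that does the opposite of whatever the
        # claimed decider predicts about it.
        def paradox():
            if halts(paradox):   # decider says "paradox halts"...
                while True:      # ...so paradox loops forever
                    pass
            # decider says "paradox loops", so paradox halts at once
        return paradox

Whatever halts(diagonal(halts)) answers is wrong, just as G is true if and only if no element corresponding to a proof of G exists under the mapping.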

The key is the mapping from proofs to elements of the model, and Godel numbers were the first such map, but they are not the only such map, and there is nothing particularly special about them other than that they were the first to be discovered. They might have some advantages if you are actually writing out a detailed formal proof of Godel's theorem by hand, just as buggy whips can be useful if you are driving a horse and buggy. But there aren't many compelling reasons to do either one in today's world except as an exercise in nostalgia.


> "Γ ⊨ φ" means that the string φ is true under some model of Γ.

It means that φ is true in every model of Γ.[0]

> If we have such a P in hand we can always check it, but if we don't have a P the question of whether or not one exists is generally an open one.

> The surprise is that it turns out that with only a very small amount of structure we lose "Γ ⊨ φ → Γ ⊢ φ".

That is not true for first-order logic. In first order logic Γ ⊨ φ → Γ ⊢ φ is always true. That's the completeness theorem.[1] This really only starts to become false when you go to e.g. second-order logic, but first-order logic is really the most commonly used type of logic and plenty sufficient for almost all of mathematics (though that becomes a philosophical debate to some extent).

Gödel's theorem is really about first-order logic, and it says that even though completeness of FOL is a nice thing, it doesn't help us with the goal of finding an effective theory of the natural numbers (effective in the sense that proofs are mechanically verifiable).

[0]: That's the standard definition, as it can be found e.g. in https://milneopentextbooks.org/download/friendly-intro-2e-up... (Definition 1.9.1) or https://books.google.it/books?id=u0wlXPHATDcC&pg=PA67#v=onep... (Definition 3.4.4 (iv)) and many other textbooks on logic.

[1]: See e.g. https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_th... - and every introductory textbook about mathematical logic, including the ones referenced above.


> It means that φ is true in every model of Γ.[0]

> (Definition 1.9.1)

That is not a definition of ⊨, that is a definition of logical implication. Not the same thing.

The ⊨ symbol is defined in definition 1.7.4, and it is used in a different sense than we are using it here. The left-hand argument to the ⊨ operator in that book is a model (an "L-structure" using the book's terminology), not a theory. But Γ is a theory. So writing Γ ⊨ φ is fundamentally incompatible with the definition of ⊨ given in that book.

And our disagreement turns on this, at least in part, because one of the consequences of the incompleteness theorem is that there are models of PA (or RA or whatever your favorite axiomization of arithmetic happens to be) where G is true, and there are models where G is false.

> That is not true for first-order logic. In first order logic Γ ⊨ φ → Γ ⊢ φ is always true.

Only if you restrict what you mean by "first-order logic". Under some definition it is manifestly not true: any (first order) theory that includes arithmetic is incomplete.

> first-order logic is really the most commonly used type of logic and plenty sufficient for almost all of mathematics

Yes, and when you include arithmetic (or any of the other variants on that theme) it becomes incomplete.


> That is not a definition of ⊨, that is a definition of logical implication. Not the same thing.

You should read that definition again. It's really hard to have a discussion if we can't agree on standard notation. The double turnstile symbol has multiple uses (sad as it is), and when there is a structure on the LHS it means something different than when there is a set of formulas on the LHS. The second sense is defined in 1.9.1 and is based on the first sense.

> And our disagreement turns on this, at least in part, because one of the consequences of the incompleteness theorem is that there are models of PA (or RA or whatever your favorite axiomization of arithmetic happens to be) where G is true, and there are models where G is false.

This is true.

> any (first order) theory that includes arithmetic is incomplete.

That is basically true, assuming that the theory is also sound (because we can construct unsound complete theories of arithmetic), and if by "arithmetic" we mean "addition and multiplication" (otherwise, see Presburger Arithmetic).

Gödel's First Incompleteness theorem does give you a bit more than that though (at least in its strongest variants): it doesn't make any assumptions about the theory being sound (mainly because that's very hard to prove), it just needs the theory to be "strong enough" and to be consistent (i.e. not leading to a contradiction). So even if we were willing to accept some false statements we can't get a complete theory (technically, that's the Gödel-Rosser proof, Gödel's original proof had a slightly different condition). It also gives you a concrete example of a Gödel sentence that can be constructed, which leads rather directly to the second theorem.


> The double turnstile symbol has multiple uses

Ah. TIL.

> This is true.

OK, it's good that we agree on that, but it doesn't really matter. What matters is whether or not the English statement "G cannot be proven" (or some variation on that theme) is true or false. More to the point, what matters is what it takes to persuade a human being that it is true or false. The double-turnstile symbol may be useful, but it is not the main event.

> That is basically true, assuming that the theory is also sound

OK, so we are basically in agreement on the things that actually matter. (Ironically, AFAICT Leary & Kristiansen don't actually define the term "sound". But again, whatever. It doesn't really matter for the substance of the point I am trying to make.)

The point I am trying to make is this: the reason Godel's theorem matters is that it is a negative result on the quest to reduce all of mathematics to a mechanical process. Implicit in this goal is that the process yield results that are in some sense "reasonable", i.e. if a putative mechanical process for doing math ever yields the result that 2+2=5 then we can be pretty sure that something has gone wrong even without knowing anything about double-turnstiles or soundness or validity or consistency or any of the other myriad things that logicians concern themselves with. These are all just ways of battening down various hatches, but none of them actually matter. What matters, the reason Godel's theorem is a thing, is that it is what informs us that it is impossible to mechanize all of mathematics, and that attempting to do so is a fool's errand. Finding the weakest possible conditions under which this result holds is of secondary importance, of interest only to logicians. If you can add 1 and check for equality -- or any of a myriad other combinations of trivial operations that allow you to enumerate proofs -- you've lost. That is what matters. And that can be proven without Godel numbers.


"Soundness" is a term that AFAIK comes from philosophy. An argument is correct if it follows logically from the premises, a sound argument is one in which the premises are also "true". In the context of theories, theories are sound if they are not merely consistent but also "true", i.e. correct with respect to reality.

You quickly see that this gets us into murky territory where you have to argue, outside of logic itself, whether or not certain things are "true". That's why soundness is not necessarily talked about much in an introductory mathematical logic text such as L&K.

The reason why this matters in the context of Gödel's theorems, though, besides a purely philosophical point of view, is that "PA is sound" cannot be expressed in the language of arithmetic, but "PA is consistent" can be expressed in that language, and therefore Gödel's first theorem itself can be expressed in arithmetic. That is the key that establishes the link between the first and the second incompleteness theorem, the latter of which expresses that a theory of sufficient strength can't prove itself consistent.[0]

A naïve proof via Turing Machines, to my understanding, gives you something equivalent to Godel's theorem provided PA is sound, but not merely provided PA is consistent.

Otherwise, I agree with your conclusions about the meaning of Gödel's theorems,[1] and I even agree that knowing all the technical conditions under which they apply shouldn't be of importance to non-logicians. I don't object to teaching CS students a version of Gödel's first incompleteness theorem via Turing Machines or even to some hand-waving where necessary.

But that's not the same thing as claiming that Turing Machines make the original proof ideas obsolete.

> And that can be proven without Godel numbers.

I've said it before but Gödel numbers are a trivial technical detail. What matters is that an encoding scheme exists, not which one it is. You can take Gödel's original proof and just replace Gödel numbering with your favourite encoding scheme and nothing substantial about the proof structure will change (except that you will want to substitute the "factoring primes" routine with a corresponding decoding routine for your preferred scheme).[2]

It just so happens that mathematicians actually like Gödel numbering because primes are very simple objects and every maths major learns about unique factorisation and the infinity of primes very early on. If as a computer scientist, you disagree, no big deal: just substitute your favourite encoding scheme.

That's an entirely different discussion, though, from the one about whether you want to prove Gödel's theorems via Turing machines or via purely logical means and the theory of primitive recursion.

[0]: That, by itself, is not terribly interesting - if the theory was inconsistent, its own consistency proof wouldn't matter (since an inconsistent theory proves everything). The reason it becomes interesting is because that also implies that a theory can't be proven consistent by a weaker theory either (since every proof in the weaker theory is a proof in the original theory). This is what essentially kills the Hilbert program: We can't, say, agree that PA is sound and consistent (which seems easy enough to believe) and use that in order to prove ZFC consistent, or some such.

[1]: Although I'm not sure they're necessarily "practically" relevant. Had it turned out that model checking of PA is "only" NP-complete, instead of uncomputable, it wouldn't practically have made it much better.

[2]: This is, for example, briefly discussed in section 19.2 of this textbook about Gödel's theorems, which I highly recommend to anyone who does care about the technical details: https://www.logicmatters.net/resources/pdfs/godelbook/GodelB...


> I've said it before but Gödel numbers are a trivial technical detail.

Actually, you seemed to be saying the exact opposite before when you wrote:

> Your point about ASCII codes ignores that we don't just want individual symbols to be numbers, we want entire formulas to be numbers too.

And at this point I'm still not sure if you really understand this or not because:

> you will want to substitute the "factoring primes" routine with a corresponding decoding routine for your preferred scheme

Well, yeah, obviously, but translating between strings of symbols and numbers is trivial when your alphabet is ASCII (or LATIN1 or UTF8 or whatever). You don't need to factor, all you need to do is divide/modulo by powers of 2. This is just obvious to anyone who has even the most elementary knowledge of modern computers.

> A naïve proof via Turing Machines, to my understanding, gives you something equivalent to Godel's theorem provided PA is sound, but not just simply provided PA is consistent.

No, it is vastly simpler than that. All you need is a computable mapping between programs and numbers. Once you have that, any claim about a program can be translated into a claim about a number (and vice versa). "P is a program that halts when run on input I" has a corresponding "N is a number that describes a program that halts when run on input I". From that, Godel's theorem is an elementary proof by reductio: if arithmetic were decidable, you could solve the halting problem by translating the question of whether or not a program P halts into the question of whether a number N describes a program that halts.
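
The shape of that reductio in code (a sketch only; both helpers here are hypothetical placeholders for the translation machinery, not real functions):

    def make_halting_decider(decide, halting_sentence):
        # decide: a claimed decision procedure for arithmetic truth
        # halting_sentence: arithmetization of "program p halts on input i"
        def halts(program, inp):
            return decide(halting_sentence(program, inp))
        return halts
    # No such halts() can exist, so no such decide() can exist either.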

Note that you essentially get soundness "for free" because the halting theorem is not grounded in axioms, it is grounded in physics. We don't have to wonder if our models of Turing Machines are "true". We can simply build a TM and observe that it behaves according to the model. And so the details of your axiomatization of math don't matter at all. Whatever axiomatization you can come up with, as long as you can use it to do arithmetic, and as long as someone can write a proof-checker for it, then Godel's theorem pops out as an elementary result.

> mathematicians actually like Gödel numbering because primes are very simple objects

Yes, I get that. But Turing machines are pretty simple too. The thing that makes TMs better than primes is that we can actually build a physical artifact whose behavior corresponds to the mathematical model of a TM. You can't do the same thing for primes because primes are not models of physical systems.


I really don't wish to belabour the primes vs. ASCII discussion, because it really is but a trivial implementation detail. You want to have a bijection between formulas and the natural numbers (such that certain operations on formulas, such as composition and extraction of subformulas, can be computed). Primes do that, as do ASCII sequences.

> From that, Godel's theorem is an elementary proof by reductio: if arithmetic were decidable, you could solve the halting problem by translating the question of whether or not a program P halts into the question of whether a number N describes a program that halts.

I think you again misunderstand the full implications of Gödel's first theorem.

Gödel's first theorem: "Any theory of arithmetic that is sufficiently strong and consistent is incomplete."

What you have just proven is a corollary of that theorem, but actually a weaker result: "Arithmetic truth is not decidable."

Gödel's theorem works even if PA is not sound, but merely consistent. And it also works for potential other, unsound theories of arithmetic, as long as they're strong enough and consistent.

Your proof via Turing machines does not: if PA is unsound, and if from PA you derive that the "impossible" program halts (or does not halt), it doesn't tell you anything, because the conclusions of PA can't be trusted. Therefore you can't derive the necessary contradiction.

Again, this matters not only philosophically, but also technically because "If PA is sound, it is incomplete" can't be written as a statement of arithmetic, but "If PA is consistent, it is incomplete" can, and this is the necessary link between the first and the second theorem.

---

I think there's a consistent thread in your comments where you think things are "obvious" and "true" because we can observe them somehow physically (e.g. your assertion that the Halting Problem is "not grounded in axioms, but in physics" - to which I would again object, but let's not go into that), but this is not how mathematics (and logic) work.

It's fine if you don't care about it personally, but Gödel's theorems have deep mathematical and philosophical implications about systems of reasoning which are not dependent on our physical world.

But I don't think this discussion will go anywhere.


> I really don't wish to belabour the primes vs. ASCII discussion because: it really is but a trivial implementation detail

OK, so we agree.

> What you have just proven is a corollary of that theorem, but actually a weaker result: "Arithmetic truth is not decidable."

No. My argument does not depend on the soundness of the system. All it depends on is that the system be able to do arithmetic correctly, so it requires consistency (because it needs to be able to identify e.g. "1+1=3" as false), but not soundness. You can throw in as many additional axioms as you like, including false ones. As long as the false axioms do not destroy the system's ability to correctly do arithmetic the proof holds. And this is true for Godel's proof as well.

[UPDATE] I just realized that what I wrote earlier:

> Note that you essentially get soundness "for free"

is misleading at best, and probably just flat-out wrong. I'm actually not quite sure what I was thinking when I wrote that. It might have been something along the lines of, "The fact that we can build TMs is evidence that the mathematical models of TM's are in some sense 'true'", but this is obviously not on point. So I retract that part of my argument.

[END UPDATE]

> systems of reasoning which are not dependent on our physical world

You cannot have systems of reasoning that are not dependent on our physical world. Reasoning is a physical process conducted by physical systems. Mathematicians are physical systems. Symbols are physical things: patterns of ink on paper, or electrons moving around, or vibrations in a medium. Any claim about a system of reasoning is necessarily a claim about a physical system. Platonic ideals do not actually exist, not even in math.

We can imagine situations where this is not the case, but these are counterfactual. We can think about "systems of reasoning which are not dependent on our physical world" just as we can think about Santa Claus or Darth Vader. We can even make factual claims about them, e.g. "Santa Claus has a beard" or "Darth Vader is Luke Skywalker's father". Claims about "systems of reasoning which are not dependent on our physical world" have exactly the same ontological status as claims about Santa Claus and Darth Vader.

The reason that we care about "systems of reasoning" at all is that some "systems of reasoning" correspond to physical reality in ways that give us leverage to navigate that reality more effectively. Absent that, the entire enterprise would be a complete waste of time.


> consistency (because it needs to be able to identify e.g. "1+1=3" as false)

that's soundness, not consistency.

You can build consistent theories of arithmetic in which 1+1=3 is true. The easiest model for such a theory is a structure with a single element.
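
That one-element structure can be checked in a few lines (my own illustration): interpret S and + as collapsing everything to the single element, and every closed equation, including "1+1=3", comes out true.

    S   = lambda x: 0           # successor of anything is the element 0
    add = lambda x, y: 0        # addition collapses to 0 as well

    one   = S(0)                # interpretation of the numeral 1
    three = S(S(S(0)))          # interpretation of the numeral 3
    assert add(one, one) == three   # "1 + 1 = 3" holds in this structure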

But that's maybe a bad example, as it's too weak for Gödel's theorems to apply to it. So let's consider instead models of Robinson arithmetic. There are models of Robinson arithmetic for which the sentence "for all x, Sx != x" is false,[0] even though we clearly want it to be true in the natural numbers. So if we add the axiom "there is an x for which Sx = x" to Robinson arithmetic, we get a theory which is arithmetically unsound, so whatever we derive from it does not readily apply to Turing machines. Yet, Gödel's theorems apply to this theory just as well (assuming it is consistent, which we can't prove, same as we can't prove it for PA).

I'll leave it at that. We're talking at cross-purposes, since we can't even get terminology straight.

[0]: https://math.stackexchange.com/questions/1066087/model-of-ro...


> But that's maybe a bad example

Gee, ya think?

> So let's consider instead models of Robinson arithmetic.

OK.

> There are models of Robinson arithmetic, for which the sentence "for all x, Sx != x" is false

OK. So?

> even though we clearly want it to be true in the natural numbers.

Who is "we"?

> if we add the axiom "there is an x for which Sx = x" to Robinson arithmetic, we get a theory which is arithmetically unsound

OK. So?

> whatever we derive from it, does not readily apply to Turing machines

Why do you think that this does not "readily apply to Turing machines"? Turing machines do not natively do arithmetic. They just write symbols to a tape according to a state transition table.

Do you really believe that writing a proof checker for Robinson arithmetic, with or without your new axiom, is different in any salient way than writing a proof checker for PA?

(And would you please consider the possibility that I might not be a complete idiot?)


> I thought it was understood that the word "incomplete" in the context of a discussion of Godel's Incompleteness Theorem meant incomplete with respect to the truths of arithmetic, because that is what Godel's Incompleteness Theorem is about.

What does "with respect to the truths of arithmetic" even mean? How is Presburger Arithmetic "incomplete" in this sense?

> And those two things would be...?

That it's so complex as to be obviously incomplete, but also so simple that anything smaller than it is obviously trivial. There are interesting systems smaller than PA, and none of this is obvious.


> What does "with respect to the truths of arithmetic" even mean?

It means: with respect to some axiomatization of arithmetic that allows Godel numbers to be constructed. Peano arithmetic, for example.

> obviously incomplete

I guess that depends on what you consider "obvious". It was far from obvious before Godel proved it. But nowadays it's not hard to understand. One generation's "obvious" is a prior generation's major breakthrough.

The big surprise in Godel's theorem is that the amount of complexity you need to introduce into a theory to render it incomplete is quite small. Peano arithmetic is sufficient, but not necessary.


> It means: with respect to some axiomatization of arithmetic that allows Godel numbers to be constructed.

That's circular logic; why would that be an interesting question to investigate absent the incompleteness theorem?

> The big surprise in Godel's theorem is that the amount of complexity you need to introduce into a theory to render it incomplete is quite small.

Agreed, but that implicitly requires understanding that small theories like Presburger Arithmetic are complete (which they are in every usual meaningful sense, but you seem to be claiming otherwise).


> That's circular logic; why would that be an interesting question to investigate absent the incompleteness theorem?

Because the amount of logical infrastructure that you need is surprisingly small. All you need is to be able to do basic grade-school arithmetic: add, subtract, multiply, divide, test for equality. And you don't even need all that. You can build everything you need from much simpler primitives.

The reason this is not circular is that before Goedel's theorem mathematicians were trying to construct a formal system that would allow you to derive all true statements about not just arithmetic, but all of mathematics. It turns out that this is impossible. As soon as you can do arithmetic, Goedel's theorem applies and you have lost.

> that implicitly requires understanding that small theories like Presburger Arithmetic are complete

I don't see why. Goedel's theorem matters in the context of trying to reduce all of mathematics to a formal system. Presburger arithmetic fails not because it is syntactically incomplete but because it cannot (for example) do division, so it cannot formulate (for example) the concept of a prime number and so it cannot (for example) prove the fundamental theorem of arithmetic. So yes, Presburger arithmetic is decidable, but it is not a plausible candidate for a complete (in the informal sense) formulation of all of mathematics. Goedel's theorem shows that if you can add, subtract, multiply and divide and test for equality, then you have lost. And if you can't do those things, then you have also lost, but you don't need a theorem to tell you that.


ASCII has 7 bits

ASCII encodes strings as powers of 128.

:-D

Otherwise seems legit



