
>> For me it just means that you need to put additional restrictions of what you can do in math so that kind of statement is excluded from math.

The thing about Gödel's proof is that he didn't just come up with one example of a "this statement is not provable" sentence; he proved that any sufficiently complex math system will have unprovable sentences (some true, some false, some you'll never be able to know). And "sufficiently complex" is not all that complex! So if you want to be able to do calculus, or even most arithmetic, you can't just "put additional restrictions ... so that kind of statement is excluded from math."




He proved that any sufficiently complex math system will have unprovable sentences.

That's mostly because "sufficiently complex" is defined by mathematicians, not computer scientists. In particular, it includes infinities, which do not exist in the physical universe. Gödel's proof requires allowing unlimited-depth recursion, which does not exist in the physical world.

The halting problem is decidable for any deterministic system with finite memory. Either you halt, or you repeat a state. This covers most real-world computer programs.

There's a useful subset of arithmetic and logic which is completely decidable (essentially Presburger arithmetic). It contains integer addition, subtraction, multiplication by constants, and all the logic operators. This covers most subscript checking in programs, which is quite useful. You can probably add modular arithmetic with a fixed upper bound to that subset.

Now, some systems may require very large amounts of work to prove, but that's different from being undecidable. "Very large" is not infinite. That heads us off into the theory of lower bounds, P = NP, and all that, where there are still many open questions.

This knocks out many of the sillier claims about undecidability making something impossible in the real world.
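The "either you halt, or you repeat a state" observation can be turned directly into a decision procedure. Here is a minimal Python sketch, under my own illustrative convention that a deterministic machine is modeled as a `step` function returning the next state, or `None` when it halts:

```python
def halts_finite(step, start):
    """Decide halting for a deterministic system with finitely many
    states: either the run halts, or some state repeats, in which
    case it loops forever."""
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False  # a state repeated: infinite loop
        seen.add(state)
        state = step(state)
    return True  # step returned None: the machine halted

# A counter that halts at 10, and a counter that cycles mod 5:
print(halts_finite(lambda s: s + 1 if s < 10 else None, 0))  # True
print(halts_finite(lambda s: (s + 1) % 5, 0))                # False
```

Note that the `seen` set may grow as large as the machine's entire state space, which is exactly the memory objection raised in the replies.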


>In particular, it includes infinities, which do not exist in the physical universe.

Bold claim. As far as I know the centers of black holes are indeed non-removable singularities of (some) solutions to GR equations.

What's funny is I had a debate with a cosmologist where I was on your side of the debate because he was a mathematical realist (and therefore forced to reconcile the same divergences with the "reality" of math).


> The halting problem is decidable for any deterministic system with finite memory. Either you halt, or you repeat a state. This covers most real-world computer programs.

The problem being that you cannot decide whether a system with such finite memory halts without using another, different system that has even more memory. And if that larger system doesn't "real-world" exist, can you truly say that the halting problem is decidable?


You can determine if a program halts by running two copies in lockstep, one at half the speed of the other. If their states are ever the same after the first step, they're in an infinite loop.

That was actually used in an early batch system for running student jobs in an interpreter. A successful student job ran for under a second; one in a loop ran for 30 seconds until it was killed for taking too long. So detecting simple loops, even with substantial extra overhead, was worth it.
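The lockstep trick described above is Floyd's cycle-detection algorithm: it trades the visited-state set for just two extra copies of the state. A sketch under the same hypothetical convention that `step` returns the next state, or `None` on halt:

```python
def halts_lockstep(step, start):
    """Run two copies of the machine in lockstep, one at half the
    speed of the other (Floyd's cycle detection).  If their states
    ever coincide after the first step, the run is in a loop."""
    slow = fast = start
    while True:
        fast = step(fast)
        if fast is None:
            return True  # the fast copy halted
        fast = step(fast)
        if fast is None:
            return True
        slow = step(slow)
        if slow == fast:
            return False  # states coincide: infinite loop

print(halts_lockstep(lambda s: s + 1 if s < 10 else None, 0))  # True
print(halts_lockstep(lambda s: (s + 1) % 5, 0))                # False
```

The design choice here is the memory/time trade-off: only two states are kept at any moment (the "2M memory" mentioned below), at the cost of stepping the machine roughly three times per position.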


Yes, but the point is, if you have an amount of memory M, there exists an amount of memory N for which there is no program that fits into M that can successfully predict whether an arbitrary program that fits into N will halt. No infinities required.

(since your example is using two copies, you're essentially using 2M memory)


Exactly!

For the same reason, when someone uses the halting problem to 'explain' why their program freezes, they are wrong. They are wrong on multiple counts, actually.

For pure mathematics, this stuff is relevant. For all practical purposes, it is not (as long as infinite-precision analog memories are not involved).


Sure you can. Just ban the concept of provability from your math language (or some elements that lead to it). The same way they banned weird kinds of sets and started talking about set families instead of sets of sets.

If you declare that a set being its own element is a nonsensical statement, you no longer have a paradox with the set of sets that are not their own elements.

A base assumption of math is that a correctly constructed mathematical statement is not nonsensical: it must be true, false, or not determined by the current set of axioms. And I think that assumption of the sensibility of any math statement leads to paradoxes.


In the case of set theory, you can exclude sets of sets and still have an extremely rich and deep mathematical theory of sets. (But it is definitely still incomplete - for example ZFC can't prove the continuum hypothesis, and if you take the continuum hypothesis or its negation as an axiom, there are still infinitely more unprovable statements in that mathematical theory)

In the case of Gödel's incompleteness theorem, excluding enough math to avoid unprovable sentences leaves you with only very simple axiom systems. If your mathematics is complete enough to even do simple arithmetic (i.e. Peano arithmetic), then there are unprovable statements.

It's not as easy a fix as Russell's set theory paradox, which is why it's an important and foundational aspect of nearly all mathematics even today.


The elements that lead to the concept of provability[1] are: 0, 1, addition, multiplication, quotient, remainder, and inequality.

Quotient and remainder are definable implicitly in the language of number theory (the language of Peano Arithmetic). Sometimes inequality is explicitly defined, and sometimes it is implicitly defined using exists and addition.

From these elements you can go on to define the Goedel beta function, which allows you to encode lists of numbers and extract the nth element from such an encoding. From there you can go on to define arbitrary primitive recursion (and even more). From there provability can be defined as a primitive recursive function.

I'm not sure which element there you want to ban. Maybe you want to ban multiplication?

[1]http://r6.ca/blog/20190223T161625Z.html
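The beta function mentioned above is simple enough to demonstrate concretely: beta(b, c, i) = b mod (c*(i+1) + 1), where the pair (b, c) encoding a given sequence is found via the Chinese Remainder Theorem. A Python sketch; the `encode` helper and its choice of c = max(n, max a_i)! are one standard construction, not the only one:

```python
from math import factorial, prod

def beta(b, c, i):
    # Goedel's beta function: the i-th element of the sequence
    # encoded by the pair (b, c).
    return b % (c * (i + 1) + 1)

def encode(seq):
    """Find (b, c) with beta(b, c, i) == seq[i] for all i.
    Taking c = max(len(seq), max(seq))! makes the moduli
    m_i = c*(i+1)+1 pairwise coprime and larger than every seq[i],
    so a suitable b exists by the Chinese Remainder Theorem."""
    c = factorial(max(len(seq), max(seq)))
    moduli = [c * (i + 1) + 1 for i in range(len(seq))]
    total = prod(moduli)
    b = 0
    for a_i, m in zip(seq, moduli):
        partial = total // m
        b += a_i * partial * pow(partial, -1, m)  # CRT term for modulus m
    return b % total, c

b, c = encode([3, 1, 4, 1, 5])
print([beta(b, c, i) for i in range(5)])  # [3, 1, 4, 1, 5]
```

This is the mechanism that lets a language with only addition and multiplication talk about finite sequences, and hence define primitive recursion and, ultimately, provability.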


You can ban the way the elements are combined. When you reach the step where you define primitive recursion, you could say that statements involving recursion of depth more than a hundred are not true or false or undecided, but just meaningless and excluded from mathematical consideration.

I know it sounds silly, but the statements that prevent infinite recursion in programming languages often do look silly. They look like a hackish stopgap that doesn't fit the pristine recursive algorithm. Yet they work and protect you, at the cost of the recursion not being able to correctly deal with stuff that would need deeper recursion.


These statements are not phrased in terms of recursion. Once you inline all the definitions then they are phrased in terms of arithmetic. That's the whole point of all this!

Have a look at http://tachyos.org/godel/Godel_statement.html and tell me exactly why that formula is banned. Is the formula too long? Too many uses of multiplication? Give me a decision procedure.


Yes. To avoid paradoxes you need to limit nesting, inlined or not. This statement reeks of uncontrolled infinities, with all the quantifiers, and I don't even know what some symbols mean, like 0'''. Because of my ignorance I can't point to the exact problem here, but I don't think it is the multiplication.

Alternatively you might just redefine the concept of something being true such that only provable things are true, and unprovable ones are either false, undecidable or nonsensical.

And a nonsensical thing is defined as a statement that's unprovable but seemingly true.

I think there are many ways to fix this just by restricting how you define things. And it's not about restricting arithmetic, because that's not the core of the issue; that's just the (simplest?) example.


So your problem is with the use of unbounded quantifiers that range over all natural numbers?

So for example you would consider "∀x. ∀y. x + y = y + x" a nonsense statement because we are quantifying over all natural numbers, and there are an infinite number of natural numbers, so we cannot quantify over them?

(For the record the ' in 0' or x1' is a post-fix notation for the successor operation. See http://tachyos.org/godel/proof.html for details).


I don't know what the problem with that particular long statement is.

You might just say that a thing is nonsensical if it's not provable but is not false either.

This might be a sensible 'stack overflow' exception if we really are unable to provide reasonable limits on self-reference and on reasoning relying on infinities.


Nonsensical if it's not provable with respect to what theory, exactly? Elementary Function Arithmetic? Primitive Recursive Arithmetic? Peano Arithmetic? Martin-Löf type theory? ZF set theory? ZFC + "there exist an infinite number of Woodin cardinals"? "The set of true statements of number theory"?

Each of these logical theories is able to prove an increasing number of arithmetic propositions. What is or is not provable is relative to the deduction system or selection of axioms.

For example, that big expression that I linked to is designed so that it isn't provable in Peano Arithmetic, but it will be provable in Martin-Löf type theory, ZF set theory, etc.


If it's not provable and not false in F, it's nonsensical in F. It might become sensible with more axioms.

I'd like to ask you a question...

Can you take the expression you linked to and add its negation as the additional axiom to Peano arithmetic? Or would it lead to some contradictions?


The [Gödel 1931] proposition I'mUnprovable does not exist, because the fixed point on which it is based does not exist, since it violates restrictions on orders of propositions.


The statement that I linked to clearly exists. It's right there in front of your eyes.


If the proposition I'mUnprovable exists in foundations, then the foundations are inconsistent, which was proved in the article linked elsewhere in this discussion.


Does the proposition I linked to exist? Can you see it? Is it not a well-formed proposition in the language of number theory?

How about this proposition: "∀x. ∀y. x + y = y + x". Does it exist? Is it not a well-formed proposition in the language of number theory?


Did you not get the point?

If the [Gödel 1931] proposition I'mUnprovable exists in foundations, then foundations are inconsistent.

See https://papers.ssrn.com/abstract=3603021


I'm sorry, but no, I don't get the point. You keep talking about the [Gödel 1931] proposition I'mUnprovable, but today is 2021, not 1931, and I'm talking about a specific proposition that I've linked to, which is, as far as I can tell, a clearly well-defined first-order logical proposition involving classical logic, zero, successor, plus and times.

I want to know if you object to the existence of that formula that you can see on your screen with your own eyes, and if you do object to it why you do.

Because I contend that that formula is of exactly the same character as Goedel's statement, with the difference being that with a computer, and modern encoding functions, we can actually strip away all the definitions and compute the fixed point and literally print it out onto your screen. It is right there in front of your eyes. Look at it!

Yet you keep on insisting that fixpoints don't exist, despite the fact that I have pointed you directly to a formula that has been specifically computed to satisfy a logical fixed-point equation.

It's like saying that there cannot be a fixed point of the function f(x) := 3x-10 by waving your hands and claiming functions don't have fixed points because numbers need to be ordered. But that is daft. All you have to do is compute f(5) = 3*5-10 = 5 to see that 5 is a fixed point of f.

Take a look at this. The proposition 0=0 is a fixed point of phi(X) = ¬(¬X). Just do the logical deduction to see that 0=0 ⇔ ¬(¬ 0=0) is a true arithmetic statement. See, even fixed points for propositions sometimes exist!


Orders on propositions were introduced by Bertrand Russell to prevent inconsistencies in foundations of mathematics. Strong types prevent construction of I’mUnprovable using the following recursive definition:

     I’mUnprovable:Proposition<i> ≡ ⊬I’mUnprovable

Note that (⊬I’mUnprovable):Proposition<i+1> in the right-hand side of the definition, because I’mUnprovable is a propositional variable in the right-hand side. Consequently,

    I’mUnprovable:Proposition<i> ⇒ I’mUnprovable:Proposition<i+1>

which is a contradiction. The crucial issue with the proofs in [Gödel 1931] is that the Gödel number of a proposition does not capture its order. Because of orders of propositions, the Diagonal Lemma [Gödel 1931] fails to construct the proposition I’mUnprovable.


Perhaps a better explanation: it's similar to the halting problem in computer science. There are programs p and inputs x where "program p halts on input x" is unprovable (equivalently, there is no computable way to determine which pairs (p, x) halt), and the key is that we have no way of knowing which p and x this is true for, among the ones we don't yet have proofs or halt-prediction programs about.

You may want to argue that this can be "solved" by never letting programs take other programs as input, which isn't wrong, exactly, but you're left with a paradigm in which not very much can be computed.


I think it's important to point out that you can do calculus and arithmetic and only ever construct proofs of provable statements. The possibility of deriving unprovable-but-true statements in an axiomatic system doesn't mean they're common, or that you can't prove all the statements one has encountered thus far.



