Hacker News

Yes. It’s a basic result proven in any mathematical logic course at the senior or beginning graduate level.

If a Turing machine can't prove the consistency of its axiomatic system from within that system (though it could from within a larger system), but I can, then this is evidence against your belief. At least as I see it.

I have the minority view that the Incompleteness results (the proof of them) are a limitation of artificial intelligence.



> Yes. It’s a basic result proven in any mathematical logic course at the senior or beginning graduate level.

OK, but note that you've moved the goal posts here. Your original question was:

> Can an AI come up with the Incompleteness Theorems?

There is a difference between coming up with those theorems and being able to reproduce them after having been shown how. There can be no doubt that an AI can do the latter; it's not even speculative any more. ChatGPT can surely recite the proof of the consistency of PA within ZFC.

> I have the minority view that the Incompleteness results (the proof of them) are a limitation of artificial intelligence.

Yeah, well, there's a reason this is the minority view. How do you know that the incompleteness results don't apply to you? Sure you can see that PA can be proven consistent in ZFC, but that is not the same thing as being able to see the consistency of the formal system that governs the behavior of your brain. You don't even know what that formal system is. It's not even clear that it's possible for you to know that. It's possible that your brain contains all kinds of ad-hoc axioms wired in by millions of years of evolution, and it's possible that these are stored in such a way that they cannot be easily compressed. Evolution tends to drive towards efficient use of resources. So even if you had the technology to produce a completely accurate model of your brain, your brain might not have the capacity to comprehend it.

History is full of people making predictions about how humans will ultimately prove to be superior to computers. Not a single one of those predictions has stood the test of time. Chess. Go. Jeopardy. Writing term papers. Generating proofs. Computers do all these things now, and they've come to do them in the span of a single human lifetime. I see absolutely no reason to believe that this trend will not continue.


Thanks for the response and perspective. I'll contemplate it more later.

While I personally may not have come up with the Incompleteness results, humans did. The discussion is about human intelligence in general (particularly as applied to bright people), not about my own intelligence and its limitations.

The second-order Peano Axioms are categorical while the first-order Peano Axioms are not. The first-order axioms are used precisely because it was the dream of Hilbert and others to reduce mathematics to a computable system. That dream cannot be realized. We humans can prove things like Goodstein's theorem, a statement that is true in second-order PA. How will a computer prove such a thing? There is no effective, computable means of determining whether a given statement is an axiom in PA.
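For what it's worth, Goodstein sequences themselves are perfectly computable even though first-order PA can't prove that every one of them terminates. A minimal sketch (function names are my own, for illustration only):

```python
def hereditary_rewrite(n, base, new_base):
    """Write n in hereditary base-`base` notation, then replace
    every occurrence of `base` (including in exponents) by `new_base`."""
    if n == 0:
        return 0
    total, exp = 0, 0
    while n > 0:
        digit = n % base
        if digit:
            total += digit * new_base ** hereditary_rewrite(exp, base, new_base)
        n //= base
        exp += 1
    return total

def goodstein(n, max_steps=25):
    """First `max_steps` terms of the Goodstein sequence starting at n."""
    seq, base = [n], 2
    while n > 0 and len(seq) < max_steps:
        # Bump the base by one, then subtract one.
        n = hereditary_rewrite(n, base, base + 1) - 1
        base += 1
        seq.append(n)
    return seq
```

For example, `goodstein(3)` yields `[3, 3, 3, 2, 1, 0]`, while the sequence starting at 4 grows enormously before (eventually, far beyond any practical horizon) reaching 0.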

I don't know anything about the chess algorithms but my understanding is that they rely, essentially, on searching a vast number of possible outcomes. Can a computer beat Magnuson with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?
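For context, the search being described here is classic minimax with alpha-beta pruning (modern engines layer learned evaluation functions on top of it). A toy sketch over an explicit game tree, not a real chess engine:

```python
def negamax(node, alpha=float("-inf"), beta=float("inf")):
    """Best achievable score for the player to move.
    `node` is either a numeric leaf score (from the mover's view)
    or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    best = float("-inf")
    for child in node:
        # Each child is scored from the opponent's perspective, so negate.
        score = -negamax(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # opponent will never allow this line: prune
            break
    return best

# Two-ply tree: we pick a branch, the opponent then picks a leaf.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

Here `negamax(tree)` returns 3: the opponent drives each branch to its minimum (3, 2, 2), and the mover takes the best of those. Real engines search millions of positions per second this way, which is exactly the "vast number of possible outcomes" point.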

Thanks for the discussion. I'll contemplate what you've written and any response you care to make. I won't respond further since I'm delving into areas I know little about.

https://en.wikipedia.org/wiki/Chinese_room


> humans did

No. Not humans. One human.

> The discussion is about human intelligence in general (particularly applied to bright people) not about my own intelligence and its limitations.

OK, but if you're going to talk about human intelligence in general then you have to look at what humans do in general, and not what an extreme outlier like Kurt Gödel did as a singular event in human history.

> particularly applied to bright people

And how are you going to measure brightness?

> How will a computer prove such a thing?

I have no idea. (I was going to glibly say, "The same way that humans do", but one of the lessons of AI is that computers generally do not do things the same way that humans do. But that in no way stops them from doing the things that humans do.) But just because I don't know how they will do it in no way casts doubt on the near-certainty that they will do it, possibly even within my lifetime given current trends.

> I don't know anything about the chess algorithms but my understanding is that they rely, essentially, on searching a vast number of possible outcomes.

Yes, that's true. So?

> Can a computer beat Magnuson

I presume you meant Magnus Carlsen? Yes, of course. That experiment was done last year:

https://www.youtube.com/watch?v=dgH4389oTQY

> with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?

What difference does that make? But the answer is still clearly yes because the computer could simply emulate Carlsen's brain. A 10x speed advantage would surely be enough to win.


I don’t believe you are engaging in a good faith discussion. Your previous comment is worthy of further contemplation but not this one. A computer can not emulate a person’s brain. At least not now, and there isn’t sufficient evidence to believe that it is even theoretically or practically possible to do in the future.

Your response here implicitly admits there’s a difference between human thinking and computer “thinking”. A chess program that just searches a vast number of possibilities and chooses the best one is not thinking like a human. It’s not even close.

> How will a computer prove such a thing? I have no idea

If you knew about these things you’d know that it isn’t possible to have an algorithm that halts in a finite number of steps that determines whether or not a given statement is an axiom in 2nd order PA. A computer is incapable of reasoning about such things.
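Whatever the precise statement of the claim, the standard tool behind this kind of impossibility result is diagonalization, which is easy to sketch in code: given any claimed halting decider, you can construct a program it must misjudge (the names here are illustrative, not a real library):

```python
def diagonal(halts):
    """Given a claimed decider halts(f) -> bool (True iff f() halts),
    construct a function the decider must get wrong."""
    def troublemaker():
        if halts(troublemaker):
            while True:  # decider said we halt, so loop forever
                pass
        # decider said we loop, so halt immediately (return None)
    return troublemaker

# Example: a decider that claims nothing ever halts.
def never_halts(f):
    return False

t = diagonal(never_halts)
# never_halts says t loops forever, yet calling t() returns immediately.
```

The same trick works against any candidate decider, which is why no algorithm can settle all such questions in a finite number of steps.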


> A computer can not emulate a person’s brain.

Earlier you wrote:

> Can a computer beat Magnuson with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?

If a computer can't emulate a person's brain then how are you going to assess whether or not the number of computations it's doing is "within one order of magnitude of what a human can do"?

> A computer is incapable of reasoning about such things.

You want to bet on that? Before you answer you'd better re-read your claim very carefully. When you realize your mistake and correct it, then my answer will be that humans aren't guaranteed to be able to determine these things in a finite number of steps either. There's a reason that there are unsolved problems in mathematics.



