
I think you're being overly dismissive of the argument. Admittedly my recollection is hazy but here goes:

Computers are symbol manipulating machines and moreover are restricted to a finite set of symbols (states) and a finite set of rules for their transformation (programs).

When we attempt to formalize even a relatively basic branch of human thinking, simple whole-number arithmetic, as a system of finite symbols and rules, Gödel's theorem kicks in. Such a system can never be complete - i.e. there will always be holes or gaps: true statements about whole-number arithmetic that cannot be reached using our symbols and rules, no matter how we design the system.
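
To make "symbols and rules" concrete, here's a minimal sketch in Python. It uses Hofstadter's toy MIU system rather than real arithmetic, so it's an analogy, not Gödel's theorem itself: a formal system is just a finite alphabet plus axioms plus rewrite rules, and a program can mechanically enumerate everything derivable. The string "MU" never appears in the output - a stand-in for a statement the system can never reach.

  from collections import deque

  # Toy formal system: Hofstadter's MIU system. Finite alphabet {M, I, U},
  # one axiom, four rewrite rules - standing in for a real proof system.
  AXIOM = "MI"

  def successors(s):
      """Every string derivable from s by a single rule application."""
      out = set()
      if s.endswith("I"):              # Rule 1: xI -> xIU
          out.add(s + "U")
      if s.startswith("M"):            # Rule 2: Mx -> Mxx
          out.add(s + s[1:])
      for i in range(len(s) - 2):      # Rule 3: rewrite any III to U
          if s[i:i + 3] == "III":
              out.add(s[:i] + "U" + s[i + 3:])
      for i in range(len(s) - 1):      # Rule 4: delete any UU
          if s[i:i + 2] == "UU":
              out.add(s[:i] + s[i + 2:])
      return out

  def enumerate_theorems(limit=20):
      """Breadth-first enumeration of derivable strings ("theorems")."""
      seen, queue, theorems = {AXIOM}, deque([AXIOM]), []
      while queue and len(theorems) < limit:
          s = queue.popleft()
          theorems.append(s)
          for t in sorted(successors(s) - seen):
              seen.add(t)
              queue.append(t)
      return theorems

  print(enumerate_theorems())
  # "MU" is never printed however long this runs (the number of I's is
  # never divisible by 3) - an unreachable "truth" for this system.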

We can of course plug any holes we find by adding more rules but full coverage will always evade us.

The argument is that computers are subject to this same limitation: no matter how we attempt to formalize human thinking using a computer - i.e. as a system of symbols and rules - there will be truths that the computer can simply never reach.



> Computers are symbol manipulating machines and moreover are restricted to a finite set of symbols (states) and a finite set of rules for their transformation (programs).

> [...] there will be truths that the computer can simply never reach.

It's true that if you give a computer a list of consistent axioms and restrict it to only output what their logic rules can produce, then there will be truths it will never write -- that's what Godel's Incompleteness Theorem proves.

But those are not the only kinds of programs you can run on a computer. Computers can (and routinely do!) output falsehoods. And they can be inconsistent -- and so Godel's Theorem doesn't apply to them.

Note that nobody is saying that it's definitely the case that computers and humans have the same capabilities -- it MIGHT STILL be the case that humans can "see" truths that computers will never be able to. But this argument involving Godel's theorem simply doesn't work to show that.


I don’t see the logic of your argument. The fact that you can formulate inconsistent theories - where all falsehoods will be true - does not invalidate Gödel’s theorem. How does the fact that I can take the laws of basic arithmetic and add the axiom “1 = 0” to my system mean that Gödel doesn’t apply to basic arithmetic?


Godel's theorem only applies to consistent systems. From Wikipedia[1]:

  First Incompleteness Theorem: Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F.

If a system is inconsistent, the theorem simply doesn't have anything to say about it.

All this means is that an "inconsistent" program is free to output unprovable truths (and obviously also falsehoods). There's no great insight here, other than trivially refuting Penrose's claim that "there are truths that no computer can ever output".

[1] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...


You’re equating computer programs producing “wrong results” and the notion of inconsistency - a technical property of formal logic systems. This is not what inconsistency means. An inconsistent formalization of human knowledge in the form of a computer program is trivial and uninteresting - it just answers “yes that’s true” to every single question you ask it. Such formalizations are not interesting or even relevant to the discussion or argument.

I think much of the confusion arises from mixing up the object language (computer systems) and the meta language. That's fairly natural, since the central "trick" of the Gödel proof itself is to allow statements at the meta level to be expressed using the formal system itself.


> An inconsistent formalization of human knowledge in the form of a computer program is trivial and uninteresting - it just answers “yes that’s true” to every single question you ask it.

That's only true if you make the program answer by following the rules of some logic that contains the principle of explosion. Not all systems of logic are like that. A computer could use fuzzy logic. It could use a system we haven't thought of yet.
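
To illustrate, here's a Python sketch using the common min/max/1-x fuzzy semantics with the Kleene implication (one choice among many, purely illustrative). Classically, (P AND NOT P) -> Q is a tautology, which is exactly the principle of explosion; under fuzzy truth values the "contradiction" can sit at 0.5 and no longer forces an arbitrary Q.

  # Principle of explosion: classical two-valued logic vs. one fuzzy semantics.
  # Fuzzy connectives here: AND = min, NOT a = 1 - a, a -> b = max(1 - a, b).

  def classical_explosion(p, q):
      # (P AND NOT P) -> Q over {False, True}
      return (not (p and not p)) or q

  def fuzzy_explosion(p, q):
      # The same formula with truth values in [0, 1].
      contradiction = min(p, 1.0 - p)     # P AND NOT P
      return max(1.0 - contradiction, q)  # ... -> Q

  # Classically a tautology: any contradiction entails everything.
  assert all(classical_explosion(p, q)
             for p in (False, True) for q in (False, True))

  # With P = 0.5, "P AND NOT P" is half-true, and the implication to an
  # arbitrary Q is only half-true as well - explosion fails.
  print(fuzzy_explosion(0.5, 0.0))  # 0.5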

You're imposing constraints on how a computer should operate, and at the same time allowing humans to "think" without similar constraints. If you do that, you don't need Godel's theorem to show that a human is more capable than a computer -- you just built computers that way.


I’m not imposing any constraints - the point is that inconsistent formulations are not interesting or relevant to the argument no matter what system of rules you look at. This has nothing to do with any particular formalism. I think the difficulty here is that words like completeness and inconsistency have very specific meanings in the context of formal logic - which do not match their use in everyday discussion.


I think we're talking past each other at this point. You seem to have brushed past without acknowledging my point about systems without the principle of explosion, and I'm afraid I must have missed one or more points you tried to make along the way, because what you're saying doesn't make much sense to me anymore.

This is probably a good point to close the discussion -- I'm thankful for the cordial talk, even if we ultimately couldn't reach common ground.


Yes! I think this medium isn't helpful for understanding here, but it's always pleasant to disagree while remaining civil. It doesn't help that I'm trying to reply on my phone (I'm traveling at the moment), in an environment which isn't conducive to subtle understanding. All the best to you!


> We can of course plug any holes we find by adding more rules but full coverage will always evade us.

So suppose clever software can automate the process of plugging these holes. Is it then like the human mind? Are there still holes that cannot be plugged, not due to lack of cleverness in the software but due to limitations of the hardware, sometimes called the substrate?
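
(For what it's worth, this iteration has been studied - it's essentially Turing's "ordinal logics" idea, if I recall it correctly. You can mechanically add each system's consistency statement, which is closely tied to its Gödel sentence, as a new axiom and repeat; in LaTeX notation, glossing over details:)

  $F_0 = \mathrm{PA}, \qquad F_{n+1} = F_n + \mathrm{Con}(F_n), \qquad F_\omega = \bigcup_n F_n, \ \ldots$

Each stage is again a finite system of symbols and rules, so Gödel applies anew at every stage; and as I understand it, continuing past limit stages requires choosing ordinal notations, which is where the non-mechanical part sneaks back in. Automating the plugging relocates the hole rather than closing it.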

> The argument is that computers are subject to this same limitation. I.e. no matter how we attempt to formalize human thinking using a computer - i.e. as a system of symbols and rules, there will be truths that the computer can simply never reach.

If computers are limited by their substrate, though, it seems like humans might be limited by their substrate too, even if the limits are different.


Yes I think this is one way to attack the argument but you have to break the circularity somehow. Many of the dismissals of the Hofstadter/Penrose argument I’ve read here, I think, do not appreciate the actual argument.


Penrose is claiming there is new physics which is not computable, but to my knowledge Penrose offers no experimental evidence for it.

> 11:43: "...but new physics of a particular kind. What I'm claiming, from the Gödel argument (you see, this is the point which I think has got lost), is that the physics that is involved in conscious thinking has to be non-computable physics. Now, the physics we know (there's a little bit of a glitch here, because it's not completely clear) - but as far as we can see, the physics we know is computable."

link for 11:43: https://youtu.be/biUfMZ2dts8?si=Epe3gmfCzwhj_g41

Without Penrose giving solid evidence, people making counterarguments tend to get dismissive, then sloppy. Why put in the time to make well-tuned arguments filled with evidence when the other side doesn't bother, after all?



