
LLMs (our current "AI") don't use logical or mathematical rules to reason, so I don't see how Gödel's theorem would have any meaning there. They are not rule-based programs that would have to abide by non-computability; they are inexact statistical machines. Penrose even mentions that he hasn't studied them and doesn't exactly know how they work, so I don't think there's much substance here.


Despite appearances, they do: training, neurons, transformers and all, ultimately it is a program running on a Turing machine.


Well, if you break everything down to the lowest level of how the brain works, then so do humans. But I think there's a relevant higher level of abstraction in which it isn't -- it's probabilistic and as much intuition as anything else.


But it is only a program computing numbers. The code itself has nothing to do with the reasoning capabilities of the model.


Nothing to do with it? You certainly don’t mean that. The software running an LLM is causally involved.

Perhaps you can explain your point in a different way?

Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.

You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.

Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.


> Nothing to do with it? You certainly don’t mean that. The software running an LLM is causally involved.

Not in a way that would make the non-computability problems of Turing machines apply.

> Perhaps you can explain your point in a different way?

An LLM is not a logic program finding a perfect solution to a problem; it's a statistical model for finding the next probable word. The model's code does not solve a (let's say) NP-hard problem to find the solution to a puzzle; the only thing it is doing is finding the next most likely word through statistical models built on top of neural networks.

This is why I think Gödel's theorem doesn't apply here: the LLM does not encode a strict and correct logical or mathematical system that could be incomplete.
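To make the "statistical model" point concrete, here's a toy sketch in Python (hypothetical vocabulary and scoring function, not any real model's code): generation is just repeatedly turning scores into a probability distribution and picking the next word; there is no proof search anywhere.

    import math, random

    vocab = ["the", "cat", "sat", "red", "apple"]

    def next_word_probs(context):
        # stand-in for the neural network: produce a score per vocabulary word
        scores = [len(set(context) & set(w)) + random.random() for w in vocab]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]      # softmax over the scores
        return [e / sum(exps) for e in exps]

    probs = next_word_probs("the apple is")
    print(random.choices(vocab, weights=probs, k=1)[0])   # the statistically "next best" word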

> Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.

I agree with you, though I had a different angle in mind.

> You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.

> Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.

Thank you, that's food for thought.


Pick a model, a seed, and a temperature, fix a few floating-point annoyances, and the output is a deterministic function of the input.
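A minimal sketch of that claim (toy logits, nothing model-specific): once the seed and temperature are fixed, the "random" sampling step is a repeatable function of the input scores.

    import math, random

    def sample(logits, temperature, seed):
        rng = random.Random(seed)                     # fixed seed -> fixed choice
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        weights = [math.exp(x - m) for x in scaled]   # unnormalized softmax
        return rng.choices(range(len(logits)), weights=weights, k=1)[0]

    logits = [1.2, 0.3, 2.5]                          # hypothetical next-token scores
    assert sample(logits, 0.8, 42) == sample(logits, 0.8, 42)   # identical every run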


A lot of people look to non-determinism as a source of free will. It's often what underlies people's thinking when they discount the possibility of AI being conscious. They want to believe they have free will and consider determinism incompatible with free will.

Events are either caused or uncaused. Either kind can itself be a cause. Caused events happen because of their cause. Uncaused events are by definition random. If you can detect any real pattern in an event, you can infer that it was caused by something.

Relying on decisions made by randomness rather than reasons does not seem like a good basis for free will.

If we have free will, it will be in spite of non-determinism, not because of it.


That's true of any neural network or ML model. Pick a few data points, use the same algorithm with the same hyperparameters and random seed, and you'll end up with the same result. Determinism doesn't mean that the "logic" or "reasoning" is an effect of the algorithm doing the computations.
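As a toy illustration (plain gradient descent on made-up data, not a real training run): fix the seed and the hyperparameters and the learned weights come out the same every time.

    import numpy as np

    def train(seed, lr=0.01, steps=100):
        rng = np.random.default_rng(seed)
        w = rng.normal(size=3)                        # "random" init, but seeded
        X = np.array([[1.0, 2.0, 3.0], [0.5, 1.0, 1.5]])
        y = np.array([1.0, 0.0])
        for _ in range(steps):                        # gradient descent on squared error
            w -= lr * X.T @ (X @ w - y) / len(y)
        return w

    assert np.array_equal(train(0), train(0))         # deterministic given the seed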


The logic or reasoning is emergent in the combinations of activations of different artificial neurons, no?


Yes, exactly. What I meant is: it's not the code itself that encodes this logic or reasoning.


Maybe consciousness is just what lives in the floating-point annoyances


Not really possible. The models work fine once you fix that; it's just a matter of accounting for the effect of batching and concurrency, where floating-point arithmetic gives very (very) slightly different answers depending on ordering and grouping.
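The "annoyance" is just that floating-point addition isn't associative, so reordering (which batching and concurrency do) shifts the last bits. A quick Python demonstration:

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0 -- the 1.0 is swallowed when added to -1e16 first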


> LLMs (our current "AI") don't use logical or mathematical rules to reason.

I'm not sure I can follow... what exactly is decoding/encoding if not using logical and mathematical rules?


Good point. I meant that the reasoning is not encoded as logical or mathematical rules. The neural networks and related parts rely on, e.g., matrix multiplication, which follows mathematical rules, but the model won't answer your questions based on pre-recorded logical statements like "apple is red".
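A toy contrast of the two ideas (made-up numbers, nothing from a real model): a rule base stores the statement "apple is red" explicitly, while the network-style computation just multiplies embeddings by learned weights, and any "fact" only shows up implicitly in the resulting scores.

    import numpy as np

    # 1) explicit rule base: the answer is a stored logical statement
    facts = {("apple", "color"): "red"}
    print(facts[("apple", "color")])

    # 2) learned association: the answer emerges from arithmetic over weights
    rng = np.random.default_rng(0)
    embed = {"apple": rng.normal(size=4), "red": rng.normal(size=4)}
    W = rng.normal(size=(4, 4))                       # stand-in for trained weights
    print(embed["apple"] @ W @ embed["red"])          # just a score, no stored sentence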


If it is running on a computer/Turing machine, then it is effectively a rule-based program. There might be multiple steps and layers of abstraction until you get down to the rules/axioms, but they exist. The fact that it is a statistical machine intuitively proves this: statistical, because it has to apply the rules of statistics; machine, because it has to apply the rules of a computing machine.


The program, yes, is a rule-based program. But the reasoning and logical responses are not implemented explicitly as code; they are supported by the network and encoded in the weights of the model.

That's why I see it as not bound by computability: an LLM is not a logic program finding a perfect solution to a problem, it's a statistical model for finding the next probable word.



