Hacker News

Unrelated to the article but I ran across an interesting result recently that is related to AI (and the hype surrounding it): Let A be an agent and let S be a Turing machine deductive system. If A correctly understands S, then A can correctly deduce that S does not have A’s capability for solving halting problems. [1]

One way to interpret this is that all existing AI systems are obviously halting computations, simply because they are acyclic dataflow graphs of floating point operations, and this fact is both easy to state and easy to prove in any formal system that can express the logic of contemporary AI models. So no matter how much Ray Kurzweil might hope, we are still very far from the singularity.
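To make the "acyclic graph of floating point operations" point concrete, here is a toy sketch (all names and shapes are made up for illustration): a feedforward network is a finite composition of matrix ops, so evaluating it always terminates, and the exact number of primitive operations is determined by the architecture alone, not the input values.

```python
# Toy sketch: a feedforward network is a finite, acyclic chain of
# matrix ops, so evaluation always halts, and the number of primitive
# operations is a function of the architecture alone.
import random

def mlp_forward(x, layers):
    """Evaluate a multilayer perceptron; layers is a list of (W, b)."""
    ops = 0
    for W, b in layers:
        x = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
             for row, b_i in zip(W, b)]
        ops += len(W) * len(W[0]) + len(b)   # multiply-adds, fixed by shapes
        x = [max(0.0, v) for v in x]         # ReLU
    return x, ops

random.seed(0)
layers = [([[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)],
           [0.0] * 4),
          ([[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)],
           [0.0] * 2)]
out, ops = mlp_forward([1.0, 2.0, 3.0], layers)
print(len(out), ops)  # 2 26 -- op count depends only on the layer shapes
```

Whatever the input, the loop runs exactly once per layer, so halting is trivially provable from the structure.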

1: https://link.springer.com/article/10.1007/s11023-014-9349-3



Your argument is wrong, or at least miscommunicated. The AI itself can figure out only some halting problem results, not all of them, and it can make mistakes. It is not an oracle.

Recurrent neural networks are not necessarily halting computations when executed in arbitrary-precision arithmetic.
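As an illustration of the point (a made-up example, not from the article): a recurrent update run "until convergence" in exact arithmetic need never halt. The map x -> x/2 + 1 has fixed point 2, but starting from 0 it only ever approaches 2, so an exact-equality convergence loop would run forever without a step cap.

```python
# Toy sketch: with arbitrary-precision (here, exact rational) state,
# a "run until the state stops changing" loop need never terminate.
from fractions import Fraction

def run_until_fixed_point(x, max_steps):
    for step in range(max_steps):
        nxt = x / 2 + 1
        if nxt == x:            # exact fixed point: never reached from 0
            return x, step
        x = nxt
    return x, max_steps         # cap hit: state is still strictly below 2

x, steps = run_until_fixed_point(Fraction(0), max_steps=50)
print(steps, x < 2)  # 50 True -- the loop hit the cap, not a fixed point
```

In fixed-width floating point the same loop does halt, because rounding eventually makes consecutive states equal; the non-halting behavior is specific to unbounded precision.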


Have you read the referenced article?


I’ve read a fair portion of it now (at least, as fair a portion as reading on one’s phone allows), but not all of it yet, and while it does seem to be making some interesting points, I feel like it takes a fair bit of time to get to the point.

Maybe that’s partly due to the expectations of the medium, but I feel like it could be much more concise. Instead of saying, “here is a non-rigorous version of an argument which was previously presented at [other location] (note that while we believe the argument as we are presenting it is essentially the same, we are making a different choice of terminology). Now, here is a reason why that argument is insufficiently rigorous, and a way in which that lack of rigor could be used to prove too much. Now, here is a more rigorous valid version of the argument (with various additional side tangents)”, one could instead say “Let objects X, Y, Z satisfy properties P, Q, R. Then, see this argument. Now, to see how this applies to the situations it is meant to represent, [blah].”


Ya, it could be more concise, but I think that would require more prerequisites from the reader in terms of model theory and formal logic.


> all existing AI systems are obviously halting computations simply because they are acyclic dataflow graphs

No they aren't. Think about LSTMs for example.


So how do you get a value out of an LSTM?


Run it for as long as you want and look at the output state.
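Something like this toy sketch (a minimal recurrent cell, not a real LSTM, and all names are made up): the cell itself has no stopping rule; the caller decides how many steps to run and then reads off the hidden state.

```python
# Toy sketch: a recurrent cell has no built-in stopping rule; the
# caller picks the number of steps and then reads the hidden state.
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0):
    return math.tanh(w_h * h + w_x * x)   # minimal recurrent update

def run(inputs, n_steps):
    h = 0.0
    for t in range(n_steps):              # halting is the caller's choice
        h = rnn_step(h, inputs[t % len(inputs)])
    return h                              # "output state" after n_steps

h10 = run([1.0, -1.0], 10)
h1000 = run([1.0, -1.0], 1000)
print(abs(h10) < 1, abs(h1000) < 1)  # True True -- tanh keeps the state bounded
```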

How do you get a value out of a person?


So then in the statement of the theorem the agent A can determine how many cycles the unit will run before halting, correct?


By determine do you mean predict or choose?


I mean that by looking at the source code for the neural network, someone can give an upper bound on how many steps are required before the entire network halts and gives an answer, and they can prove that their upper bound really is an upper bound.
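For instance (a made-up sketch, with hypothetical names): given a network description as layer shapes plus a fixed number of unrolled recurrent steps, a step bound can be read straight off the "source code" without running anything.

```python
# Toy sketch: an upper bound on primitive ops, computed statically
# from the network description rather than by executing it.
def step_bound(layer_shapes, recurrent_steps=1):
    """Each (m, n) layer costs at most m*n multiply-adds per pass,
    repeated once per recurrent step."""
    per_pass = sum(m * n for m, n in layer_shapes)
    return per_pass * recurrent_steps

# A 2-layer net unrolled for 10 steps: the bound is static.
bound = step_bound([(4, 3), (2, 4)], recurrent_steps=10)
print(bound)  # (12 + 8) * 10 = 200
```

The proof that this really is an upper bound is just induction over the finitely many layers and steps, which is exactly why the bound only exists once the unroll length is fixed.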


They need to know the input length too. With an infinite input it will never halt.


Yes, that's the usual assumption when working with Turing machines and proofs. But I guess you could also allow infinite inputs and it wouldn't make that much difference, e.g. computing exp(x) for some real x as input.



