Unrelated to the article, but I ran across an interesting result recently that is related to AI (and the hype surrounding it): let A be an agent and let S be a deductive system implemented by a Turing machine. If A correctly understands S, then A can correctly deduce that S does not have A’s capability for solving halting problems. [1]
One way to interpret this is that all existing AI systems are obviously halting computations, simply because they are acyclic dataflow graphs of floating-point operations, and this fact is both easy to state and easy to prove in any formal system that can express the logic of contemporary AI models. So no matter how much Ray Kurzweil might hope, we are still very far from the singularity.
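Here is a minimal sketch of that halting argument (the graph encoding and function names are hypothetical, not from any particular framework): evaluating an acyclic dataflow graph in topological order visits each node exactly once, so the evaluation loop runs at most once per node by construction.

    # Sketch: evaluating an acyclic dataflow graph halts, because a
    # topological order visits each node exactly once. The graph/ops
    # encoding here is hypothetical, chosen for brevity.
    from graphlib import TopologicalSorter

    def evaluate(graph, ops, inputs):
        # graph: node -> tuple of predecessor nodes
        # ops:   node -> function applied to the predecessors' values
        values = dict(inputs)
        for node in TopologicalSorter(graph).static_order():
            if node not in values:  # input nodes are already bound
                values[node] = ops[node](*(values[p] for p in graph[node]))
        return values  # the loop ran exactly once per node: it halted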
Your argument is wrong, or miscommunicated. The AI itself can figure out only some halting-problem results, not all, and it can also make mistakes. It is not an oracle.
Recurrent neural networks are not necessarily halting when executed with arbitrary-precision arithmetic.
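A quick sketch of why the precision matters (the update rule and threshold here are made up for illustration): a recurrent cell is iterated until some data-dependent condition fires, and with exact arithmetic nothing forces that condition to ever fire.

    # Sketch: a recurrent update iterated until a data-dependent condition
    # holds. With exact arbitrary-precision state (Fraction), the state
    # space is infinite and the loop below never terminates; with fixed-
    # precision floats the state space is finite, so non-termination could
    # at least be detected by cycle detection. The update is hypothetical.
    from fractions import Fraction

    def run_rnn(x: Fraction, w: Fraction, threshold: Fraction) -> int:
        steps, state = 0, x
        while abs(state) < threshold:   # halting depends on the data,
            state = w * state + x       # not on the (cyclic) graph shape
            steps += 1
        return steps

    # run_rnn(Fraction(1), Fraction(1, 2), Fraction(3)) never returns:
    # the state climbs toward the fixed point 2 and never reaches 3.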
I’ve read a fair portion of it now (at least, a fair portion as far as reading on one’s phone goes), but not all of it yet, and while it does seem to be making some interesting points, I feel it takes quite a while to get to the point.
Maybe that’s partly due to the expectations of the medium, but I feel it could be much more concise. Instead of saying, “here is a non-rigorous version of an argument which was previously presented at [other location] (note that while we believe the argument as we present it is essentially the same, we are making a different choice of terminology). Now, here is a reason why that argument is insufficiently rigorous, and a way in which that lack of rigor could be used to prove too much. Now, here is a more rigorous, valid version of the argument (with various additional side tangents)”,
one could instead say, “Let objects X, Y, Z satisfy properties P, Q, R. Then, see this argument.
Now, to see that this applies to the situations it is meant to represent, [blah].”
I mean, by looking at the source code for the neural network, someone can give an upper bound on how many steps will be required before the entire network halts and gives an answer, and they can prove that this upper bound really is an upper bound.
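As a concrete (toy, hypothetical) accounting: for a plain feedforward net, the step count is a closed-form function of the layer shapes, so the bound can be read off from the architecture alone, before any input is seen.

    # Sketch: a provable upper bound on the arithmetic steps of a plain
    # feedforward (MLP) forward pass, read off from the layer shapes.
    # The layer sizes and cost model are hypothetical, for illustration.

    def mlp_step_bound(layer_sizes: list[int]) -> int:
        steps = 0
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
            steps += n_out * n_in   # multiplications
            steps += n_out * n_in   # additions (including the bias add)
            steps += n_out          # activation-function evaluations
        return steps                # the forward pass halts within this bound

    print(mlp_step_bound([784, 256, 10]))  # fixed before any input arrives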
Yes, that's the usual assumption when working with Turing machines and proofs. But I guess you could also allow infinite inputs and it wouldn't make that much difference, e.g. computing exp(x) for some real x as input.
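To make the exp(x) example concrete (a standard Taylor-series sketch; the stopping rule is simplified): for any finite-precision approximation of x and any requested tolerance, the series is summed in finitely many steps, so each individual answer still comes from a halting computation.

    # Sketch: approximating exp(x) by its Taylor series. The terms shrink
    # factorially, so for any rational x and tolerance the loop provably
    # terminates, even though the "full" real-valued input is infinite.
    from fractions import Fraction

    def exp_approx(x: Fraction, tol: Fraction) -> Fraction:
        total, term, n = Fraction(1), Fraction(1), 1
        while abs(term) >= tol:
            term = term * x / n   # next Taylor term: x^n / n!
            total += term
            n += 1
        return total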
[1] https://link.springer.com/article/10.1007/s11023-014-9349-3