That a knowably correct program can't simulate human reasoning is basically Gödel's theorem. One can use a diagonalization argument, similar to Gödel's proof, for programs that try to decide which Turing machines halt. Given a program P which is partial (it may give no answer on some machines) but always correct, diagonalization constructs a more widely applicable and still correct program P', i.e. P' can say that some Turing machine will not halt where P is undecided. Note that this argument doesn't involve any logic or formal systems, but it is more general: Gödel's result is a special case, since the fact that a Turing machine halts can be encoded as a theorem, and provability in a formal system can be encoded as a Turing machine.
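To make the diagonalization concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption: `toy_P` stands in for any partial-but-always-correct halting analyzer, the `Verdict` type and the string encoding of programs are invented for the sketch, and the diagonal program D is built only as source text, never executed.

```python
from typing import Callable, Literal

# Illustrative encoding (an assumption of this sketch): a "program" is the
# source text of a Python function f(m); an analyzer judges whether the
# program halts when run on its own source.
Verdict = Literal["halts", "loops", "undecided"]
Analyzer = Callable[[str], Verdict]

def toy_P(src: str) -> Verdict:
    """A partial but sound analyzer: it answers only when the answer is
    syntactically obvious, and says 'undecided' otherwise. The heuristics
    are deliberately crude; any partial-but-correct P plays the same role."""
    if "while" not in src and "for" not in src:
        return "halts"   # straight-line code with no loops must terminate
    if "while True" in src and "break" not in src and "return" not in src:
        return "loops"   # an inescapable infinite loop
    return "undecided"

def diagonal_source(P: Analyzer) -> str:
    """Source text of the diagonal program D (constructed as text, never run):
    D halts iff P claims D loops on itself, and loops forever otherwise.
    If P answered 'halts' or 'loops' on D it would be wrong either way, so
    a correct P must be undecided on D -- and then D really does loop."""
    return (
        "def f(m):\n"
        "    if P(m) == 'loops':\n"
        "        return            # halt, contradicting P\n"
        "    while True:           # otherwise loop forever\n"
        "        pass\n"
    )

def extend(P: Analyzer) -> Analyzer:
    """P': answers everything P answers, plus the one new fact that the
    diagonal argument proves, so it is strictly more widely applicable
    while remaining correct."""
    D = diagonal_source(P)
    def P_prime(src: str) -> Verdict:
        if src == D:
            return "loops"   # the case P had to leave undecided
        return P(src)
    return P_prime

D = diagonal_source(toy_P)
P_prime = extend(toy_P)
assert toy_P(D) == "undecided"   # P cannot decide its own diagonal
assert P_prime(D) == "loops"     # P' decides it, correctly
```

Note that `extend` can be iterated: each application produces a strictly stronger, still-correct analyzer (each P' has its own diagonal), and no analyzer in the resulting hierarchy is ever total.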
Penrose, indeed, believes both the stronger claim, that no program can simulate humans, and the weaker claim, that no knowably correct program can simulate humans.
That the weaker claim is unassailable shows, first, that most of the usual objections are not valid, and second, that it is hard to split the difference, i.e. to generate the output of HC using a program which is not knowably correct: a program whose binary is uninterpretable but which, as if by magic, only generates true theorems. Current AI systems, including LLMs, don't even come close.