The scenarios you described are not realistic. There's no AI getting stuck into a loop by a tricky question, not even GPT-3. Any practical solution takes its computation cost into consideration. AlphaGo, for example, would evaluate around 50K board states per move; it would not recurse over all 3^361 board states.
In general, when the problem is this hard, evolutionary methods are suitable. They naturally blend the notion of cost with that of search, and they cope better with deceptive objectives.
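To make the "blending cost with search" point concrete, here is a minimal sketch of an evolutionary loop where the evaluation cost is folded directly into the fitness score. Everything here (the bitstring genome, the truncation selection, the toy objective) is an illustrative assumption, not any specific published method:

```python
import random

random.seed(0)  # deterministic for the sake of the example

def evolve(fitness, genome_len=20, pop_size=30, generations=100, rate=0.05):
    """Toy evolutionary search over bitstrings with truncation selection."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the better half
        children = [[b ^ (random.random() < rate) for b in p]  # bit-flip mutation
                    for p in parents]
        pop = parents + children                # elitist: parents survive
    return max(pop, key=fitness)

# A toy objective: reward ones, but subtract a fixed per-bit evaluation
# cost, so "quality" and "cost" are blended into a single number that
# the search optimises directly.
def fitness(genome, eval_cost=0.1):
    return sum(genome) - eval_cost * len(genome)

best = evolve(fitness)
```

There is no explicit model of the problem anywhere: the search only ever sees the scalar fitness, which is what makes the approach robust to objectives that would trap a more analytical solver.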
Your turn of phrase "there's no AI getting stuck..." has me stuck. Are you
assuming that there exist "AIs", as in artificially intelligent entities, of
the kind imagined by science fiction writers and (some) AI researchers alike?
To clarify, there are no such systems. "AI" is the name of a research field, not
a capability of any class of systems currently known.
This suffices to explain why there is, indeed, "no AI getting stuck into a loop
by a tricky question". Because there is "no AI" at all, certainly not of the
kind that can understand a "tricky question" sufficiently well to stumble on the
paradox inside it.
For example, GPT-3 has no ability to process "this sentence is false" in such a
way as to decide its truth. AlphaGo is not capable of processing language at
all; it is only capable of playing board games, and it isn't even capable of
playing board games by reasoning, only by search. AlphaGo searches a game tree
structured as a directed acyclic graph, so it is hard to see how it could
get stuck on recursive paradoxes anyway.
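The "can't loop" point can be seen directly in a toy search: because a game tree is acyclic, a recursive search always bottoms out at a leaf. The sketch below uses plain negamax over a hand-built tree; this is only an illustration of acyclic search, not AlphaGo's actual algorithm, which is Monte Carlo tree search guided by neural networks:

```python
def negamax(node, children, value):
    """Best achievable score for the player to move at `node`.

    `children(node)` lists successor positions; `value(node)` scores a
    leaf. Because the tree is acyclic, no path leads back to an earlier
    position, so the recursion always terminates: there is no way to
    get "stuck in a loop".
    """
    succ = children(node)
    if not succ:
        return value(node)
    return max(-negamax(c, children, value) for c in succ)

# Hypothetical two-ply game: positions are strings, leaves carry scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
leaf_scores = {"a1": 3, "a2": -1, "b1": 2}

best = negamax("root",
               children=lambda n: tree.get(n, []),
               value=lambda n: leaf_scores[n])  # best == 2: move to "b"
```

Nothing in this procedure interprets anything; it only expands positions and propagates numbers, which is why a "tricky question" has no surface to attach to.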
In general, such systems as exist today do not have the mathematical properties
of the formal systems described by Gödel, Church and Turing. They don't even
have memories. So they are, let's say, immune to incompleteness, because they're
not even incomplete.