There is a sense in which "predicting the next token" is roughly an append-only Turing machine: the model can read the whole tape but only ever extend it, never rewrite it. Obviously the tokens we're using might be suboptimal for whatever goalpost "AGI" is at any given time, but the structure/strategies of LLMs are probably not far from a really good architecture, modulo refactoring for efficiency like Mamba (which still does token-stream prediction, especially during inference).
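
To make the append-only framing concrete, here's a minimal Python sketch of the decode loop. Everything here is hypothetical illustration, not any real library's API: `next_token` is a toy stand-in for a model forward pass, and the lookup table just makes it runnable. The point is only the control flow: the context grows, it is never edited.

```python
# Autoregressive decoding as an append-only tape (toy sketch).

def next_token(context: list[str]) -> str:
    # Hypothetical stand-in: a real model would run a forward pass over
    # `context` and sample from a distribution over the vocabulary.
    table = {"the": "cat", "cat": "sat", "sat": "<eos>"}
    return table.get(context[-1], "<eos>")

def generate(prompt: list[str], max_steps: int = 16) -> list[str]:
    tape = list(prompt)            # the "tape" starts as the prompt
    for _ in range(max_steps):
        tok = next_token(tape)     # read the whole tape...
        tape.append(tok)           # ...but only ever append to it
        if tok == "<eos>":
            break
    return tape

print(generate(["the"]))  # ['the', 'cat', 'sat', '<eos>']
```

Architectures like Mamba change how the read step is computed (a recurrent state instead of attention over the full tape), but the loop shape stays the same: predict, append, repeat.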