If it is running on a computer/Turing machine, then it is effectively a rule-based program. There may be many steps and layers of abstraction before you reach the underlying rules/axioms, but they exist. The fact that it is a statistical machine intuitively proves this: "statistical" means it must apply the rules of statistics, and "machine" means it must apply the rules of a computing machine.
The program itself - yes, it is a rule-based program. But the reasoning and logical responses are not implemented explicitly as code; they emerge from the network and are encoded in the weights of the model.
That's why I don't see it as bounded by computability in the usual sense: an LLM is not a logic program searching for the perfect solution to a problem; it's a statistical model predicting the next likely word.
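To make the distinction concrete, here is a minimal sketch of what "a statistical model predicting the next likely word" means mechanically: given scores (logits) a network might assign to candidate tokens, the program just converts them to probabilities and samples. The vocabulary and logit values below are purely hypothetical, and this is a toy illustration, not how any real LLM is implemented end to end.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution over candidate tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng):
    # Pick the next token according to the distribution -- the only "rule"
    # being applied here is statistics, not explicit hand-written logic.
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical scores a trained network might output for three candidates.
vocab = ["cat", "dog", "mat"]
logits = [2.0, 0.5, 3.0]
print(sample_next_token(vocab, logits, random.Random(0)))
```

The deterministic rules (softmax, sampling) are fixed and simple; everything interesting lives in the logits, which come from the learned weights.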