AlphaGo is not merely a brute-force calculator, and neither is GPT-3. They have conceptual representations. That is the advantage of deep neural networks: the ability to form concepts, as biological neural networks do.
They can form their own internal vector* representations, but they are still fixed architectures. Training AlphaGo can't change it from an RL architecture into something else, nor can training GPT-3 change it from a transformer into a more general cognitive architecture such as the human brain.
The datapath through a transformer is entirely prescribed. Once the weights are trained, feeding a test sample into it will indeed just result in a fixed series of calculations to produce an output.
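As a minimal sketch of that point (using PyTorch's stock nn.TransformerEncoderLayer as a stand-in, not GPT-3's actual stack): with the weights frozen, the same input always traces the same fixed series of calculations to the same output.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# One transformer encoder block. The sequence of operations it performs is
# fixed by the architecture; training only changes the numbers in the weights.
block = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
block.eval()  # disable dropout so inference is deterministic

x = torch.randn(1, 10, 64)  # a batch of 10 tokens with 64-dim embeddings

with torch.no_grad():
    y1 = block(x)
    y2 = block(x)

# Same input, same weights -> bit-identical output: a prescribed datapath.
print(torch.equal(y1, y2))  # True
```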
* It's a bit of a stretch to assert that deep neural nets are creating conceptual representations. For example, if you look at the early layers of a CNN, what it has learned are just primitive feature detectors (oriented edges, etc.). At higher layers the more primitive features are combined into more complex ones, but they are still just visual patterns, not concepts. It'll be the same for AlphaGo: it will be creating its own representations of complex board positions. That's better than Deep Blue having to work with a human-designed board representation, but at the end of the day it is nothing more than a board/position representation.
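You can see those primitive feature detectors for yourself. A rough sketch, assuming torchvision's pretrained resnet18 as the example CNN: the first-layer kernels it displays are small oriented-edge and color-blob patterns, nothing concept-like.

```python
import matplotlib.pyplot as plt
from torchvision.models import resnet18, ResNet18_Weights

# Load a pretrained CNN and pull out the first convolutional layer's weights.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach()  # shape: (64, 3, 7, 7)

# Normalize to [0, 1] so each kernel can be displayed as a tiny RGB image.
f = (filters - filters.min()) / (filters.max() - filters.min())

# Plot all 64 first-layer filters in an 8x8 grid.
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, kernel in zip(axes.flat, f):
    ax.imshow(kernel.permute(1, 2, 0))  # CHW -> HWC for imshow
    ax.axis("off")
plt.show()
```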