I'm aware of that, but I don't think this immediately lends itself to a prognosis of the form "LLMs and AlphaGo are deep learning neural networks running on GPUs; AlphaGo was tremendously successful at Go => LLMs will soon surpass humans".
I can consider the possibility that something coming out of GPU-based neural networks might one day surpass humans in intelligence, but I also believe there's reason to doubt it will be based on today's LLM architecture.
Maybe the connection the GP saw was in terms of using other instances of themselves for training. It's not exactly the same process, but there seems to be a hint of similarity, to my (sadly) untrained eye.