A quote from the link you posted: "Then, when you refer to “Lambda”, “ChatGPT”, “Bard”, or “Claude” then, it’s not the model weights that you are referring to. It’s the dataset."
Yet all four of these examples use the same model architecture (transformers).
We don't know that. No one has demonstrated it. It's very likely that at larger scales, for a given amount of compute, you cannot train a traditional RNN to be as good as a transformer.
We are saying the same thing. Transformers are more compute-efficient than RNNs. Nobody is denying that, but the switch away from RNNs wasn't prompted by some performance wall (i.e., it's not like we were training bigger RNNs that had stopped getting better).
We use Transformers today in large part because they got rid of recurrence and, in effect, could massively parallelize compute across sequence positions.
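To make that parallelization point concrete, here is a minimal NumPy sketch with toy dimensions (the weight names like W_h, W_q are placeholders, and causal masking is omitted): the RNN's hidden state forces a sequential loop over time steps, while self-attention produces every position's output in one set of matrix multiplications.

```python
# Toy contrast: sequential RNN steps vs. parallel self-attention (NumPy sketch).
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 4                      # sequence length, hidden size
x = rng.normal(size=(T, d))      # toy input sequence

# RNN: each hidden state depends on the previous one, so the T steps
# must be computed one after another.
W_h, W_x = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = np.zeros(d)
rnn_states = []
for t in range(T):               # inherently sequential loop over time
    h = np.tanh(x[t] @ W_x + h @ W_h)
    rnn_states.append(h)

# Self-attention: every position attends to every other position in a
# single batch of matrix multiplications, with no loop over time steps.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)                                   # (T, T) scores at once
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
attn_out = weights @ V                                          # all T outputs together

print(np.array(rnn_states).shape, attn_out.shape)  # (8, 4) (8, 4)
```

The loop in the RNN half is the point: hidden state t can't be computed until hidden state t-1 exists, whereas the attention half is a handful of matmuls a GPU can run over the whole sequence at once.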