It is hard to quantify, but subjectively (and certainly in terms of public perception), each GPT release has been a massive leap over the previous model. Maybe GPT-2 to GPT-3 was the largest, but I'm not sure how you can judge that a field is stagnating based on one improvement in a series of revolutionary improvements being somewhat more significant than the others. I think most would agree the jump from GPT-3 to GPT-4 was not marginal, and I think I'll be proven right when the jump from GPT-4 to GPT-5 isn't either. There may be a wall, but I don't see a good argument that we've hit it yet. If GPT-5 releases and is only marginally better, that will be evidence in that direction, but I'm pretty confident that won't happen.
Your analogy is odd because you're just describing what the situation would look like if you turned out to be right. Given the rate of improvement recently, I'd say we're closer to the first-flight-test stage. Yes, of course the jump from a vehicle that can't fly to one that can is in some sense a 'bigger leap' than the others in the development cycle, but we still eventually got to the moon.
I hope you're right, because it would be far more entertaining. More realistically, if you look at people's past predictions of the future, well... you already know. AI researchers in the 60s also thought AGI was just around the corner, especially once computers started playing chess and other games. Maybe we're no better than they were at predicting these things; every generation thinks it's the one that's right.