This was my question. There's a weird sort of self-cannibalism that this hints at. The LLM is only as good as it is because it's been able to train on existing SO answers. But if, over time, SO content production declines, then the LLM's results will become less reliable. It seems that a new equilibrium could be one in which -- for newer questions/concerns -- both SO and LLMs are worse than they are now.
To add a bit more nuance, SO has a question-answer format, which lends itself very well to the prompt-reply format used to train these chat applications. Most of the other sources do not, except maybe GitHub issues. Without this question-answer format, there'll be a need for a bigger data labeling effort to train LLMs on new stuff, no?