This was my question. There's a strange sort of self-cannibalism hinted at here: the LLM is only as good as it is because it was able to train on existing SO answers. But if SO content production declines over time, then LLM results will become less reliable. The new equilibrium could be one in which -- at least for newer questions and concerns -- both SO and the LLMs are worse than they are today.