Given how ChatGPT gets stuck in a rut on certain topics (especially anything about itself or AI generally), I suspect there's a lot of manually written processing, categorization, and massaging happening before (and probably after) the AI proper gets ahold of a prompt. I further suspect they audit responses to find topics where it comes up with shitty ones, then fix those with the same kind of manual process.
I'd love to know how much of its seeming smart comes from the model genuinely being that good, and how much comes from humans directly intervening, on some level, in how it responds. Is it that much better than other efforts purely on the machine learning, or has it just had vastly more human hours put into tweaking it than the others have? How much of it is "real" AI, and how much is good ol' hand-written decision trees?
If it's mostly the latter, that dependence on manual tweaking makes it vulnerable to disruption.