I don’t think that parses with the current architecture of GPT. There is no “knowledge database”, just parameter weights.
See the Toolformer paper for an extension that lets the model call external APIs, or the LaMDA paper for another approach to fact checking: a second layer on top of the language model spots “fact type” utterances, issues queries to verify them, and replaces any utterances that need correcting.
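A minimal sketch of what such a verify-and-correct layer could look like (all names here are hypothetical stand-ins, not the actual LaMDA implementation):

```python
# Sketch of a LaMDA-style fact-check layer: a classifier flags fact-type
# utterances, an external lookup gathers evidence, and flagged utterances
# are rewritten. All components are toy stand-ins, not a real API.

def fact_check(utterance, is_factual_claim, search, revise):
    """Pass one utterance through a verify-and-correct loop."""
    if not is_factual_claim(utterance):
        return utterance              # opinions / chit-chat pass through untouched
    evidence = search(utterance)      # external query, not the LM's weights
    return revise(utterance, evidence)

# Toy example with stub components:
facts = {"capital of France": "Paris"}

checked = fact_check(
    "The capital of France is Lyon.",
    is_factual_claim=lambda u: "capital" in u,
    search=lambda u: facts["capital of France"],
    revise=lambda u, e: f"The capital of France is {e}.",
)
print(checked)  # prints the corrected utterance
```

The point is that the correction lives outside the model: only the search and revise components touch up-to-date facts, and the language model itself is never retrained.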
It’s plausible that Bing is adding a separate LaMDA-style fact-check layer, but retraining the whole model seems less likely, since doing that continually would be expensive. Not an expert though.
Imagine freezing the “language” part of the model but continuing to update the knowledge database. Approaches like RETRO make this very explicit.
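A toy illustration of the RETRO idea (this is a deliberately simplified sketch: the bag-of-words “embedding” stands in for RETRO’s frozen BERT encoder, and the names are my own):

```python
# Sketch of the RETRO idea: the model stays frozen while facts live in an
# external text store that can be edited independently. Here a toy
# bag-of-words vector stands in for a real frozen encoder.
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def similarity(a, b):
    return sum(a[w] * b[w] for w in a)

class RetrievalStore:
    def __init__(self):
        self.chunks = []

    def add(self, text):
        # Updating "knowledge" means editing this list, never the model weights.
        self.chunks.append(text)

    def nearest(self, query):
        q = embed(query)
        return max(self.chunks, key=lambda c: similarity(q, embed(c)))

store = RetrievalStore()
store.add("The Eiffel Tower is in Paris.")
store.add("Mount Fuji is in Japan.")

# A frozen LM would condition on the retrieved chunk at generation time.
print(store.nearest("Where is the Eiffel Tower?"))
# → The Eiffel Tower is in Paris.
```

Correcting a fact then amounts to replacing a chunk in the store, which is exactly the “update the knowledge database while freezing the language part” split.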