Gemini cites web references, NotebookLM cites references in your own material, and the Gemini APIs have features for citations and grounding in web search content. I'm not familiar with OpenAI's or Anthropic's APIs, but I imagine they offer something similar, although I don't think ChatGPT cites content.
All these are doing, however, is fact-checking and linking out to those fact-checking sources. They aren't extracting text verbatim from a database. You could probably get close with RAG techniques, but you still can't guarantee it, in the same way that if you ask an LLM to repeat your question back to you exactly, you can't guarantee it will do so verbatim.
Verbatim reproduction would be possible with some form of tool use: rather than generating, say, a Bible verse itself, the LLM returns a structured request asking the orchestrator to run a tool that inserts the verse from a database.
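A minimal sketch of that orchestrator pattern, with an entirely hypothetical tool name (`get_verse`) and an in-memory dict standing in for the database. The point is that the verbatim text comes from the lookup, never from the model's token stream:

```python
import json

# Hypothetical verse store; in practice this would be a real database.
VERSES = {
    "John 3:16": "For God so loved the world, that he gave his only begotten Son...",
}

def handle_model_output(model_output: str) -> str:
    """Orchestrator: if the model emitted a structured tool call,
    run the tool and splice in the verbatim text; otherwise pass
    the model's text through unchanged."""
    try:
        msg = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain generated text, no tool call
    if isinstance(msg, dict) and msg.get("tool") == "get_verse":
        ref = msg["args"]["ref"]
        # Guaranteed verbatim: the text comes from the store, not the model.
        return VERSES.get(ref, f"[verse {ref} not found]")
    return model_output

# The model asks for the verse instead of generating it:
request = '{"tool": "get_verse", "args": {"ref": "John 3:16"}}'
print(handle_model_output(request))
```

This is essentially what commercial function-calling APIs do, except the model's tool request is part of the API response rather than raw JSON you have to parse out of the text.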
Are there implementations where the LLM just outputs the text of the references (or the first 100 words)? I'm sure someone has implemented this already?