No. This is a demo of directly prompting a model using voice-to-text.
If your model has a long enough context window (models like MistralLite now handle 32,000 tokens, which is roughly 30 pages of text), you could run a PDF text-extraction tool, dump the extracted text into the model's context, and use the remaining tokens to ask questions about it.
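A rough sketch of that budgeting step, assuming the common ~4 characters-per-token rule of thumb (the actual ratio depends on the tokenizer) and a hypothetical `document_text` string that already came out of a PDF extraction tool:

```python
# Sketch: fit extracted PDF text into a 32k-token context window.
# Assumes ~4 chars per token (a rough heuristic, tokenizer-dependent);
# the PDF extraction step itself is not shown here.

CONTEXT_TOKENS = 32_000
RESERVED_FOR_ANSWER = 2_000   # leave room for the question and the reply
CHARS_PER_TOKEN = 4           # crude rule of thumb

def truncate_to_context(document_text: str) -> str:
    """Trim the document so prompt + answer fit in the context window."""
    budget_chars = (CONTEXT_TOKENS - RESERVED_FOR_ANSWER) * CHARS_PER_TOKEN
    return document_text[:budget_chars]

paper = "x" * 200_000  # stand-in for text extracted from a 50-page PDF
prompt = truncate_to_context(paper) + "\n\nQuestion: what is the main result?"
```

With those numbers, anything past the first ~120,000 characters simply gets dropped, which is why the embedding-search approach below is the better fit for documents longer than the window.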
You could also plug this into one of the ask-questions-of-a-long-document-via-embedding-search tools.
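For a sense of what those tools do under the hood, here is a toy sketch: split the document into chunks, "embed" each one, and retrieve only the chunks most similar to the question for the model's context. Real tools use learned neural embeddings; a bag-of-words vector stands in here so the example is self-contained.

```python
# Toy embedding-search sketch: chunk a long document, score each chunk
# against the question, and keep only the best matches for the context.
# Bag-of-words counts stand in for real learned embeddings.

from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in "embedding": word-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(document: str, question: str, k: int = 2, size: int = 50):
    """Return the k chunks of `document` most similar to `question`."""
    words = document.split()
    chunks = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]
```

The retrieved chunks then get pasted into the prompt ahead of the question, so even a paper far longer than the context window can be queried.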
Say I have a research paper as a PDF, can I ask LLaMA questions about it?