BTW: I think of this like asking someone to put things into their own words so it's easier for them to remember. Matching on your way of talking can be weird from the LLM's point of view, so use its point of view instead!
They are two different language models. The embedding model captures too many irrelevant aspects of the prompt, which ends up placing it close to seemingly random documents. Inverting the question into the LLM's blind guess and distilling that down to keywords makes the embedding much more sparse and specific. A popular strategy has been to invert the documents into questions during initial embedding, but I think that is a performance hack that still suffers from sentence prompts being bad vector indexes.
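Roughly, the flow looks like the sketch below. This is just a minimal illustration of the idea, not anyone's production pipeline; `llm_complete` and `embed` are hypothetical stand-ins for whatever LLM and embedding model you use, and the prompts are placeholders.

```python
# Sketch of query inversion: instead of embedding the user's question directly,
# have the LLM produce a blind-guess answer, distill it to keywords, and embed those.
# `llm_complete` and `embed` are hypothetical stand-ins for your LLM and embedder.

def invert_query(question: str, llm_complete, embed):
    # 1. Blind guess: the LLM answers with no retrieved context at all.
    guess = llm_complete(
        f"Answer the following question as best you can, even if unsure:\n{question}"
    )
    # 2. Distill the guess down to search keywords, dropping the filler prose.
    keywords = llm_complete(
        f"Extract the key search terms from this text, comma-separated:\n{guess}"
    )
    # 3. Embed the keywords, not the original question or the full guess.
    return embed(keywords)

# query_vec = invert_query("How do I tune HNSW ef_search?", llm_complete, embed)
# hits = vector_store.search(query_vec, k=5)
```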
My heuristic is how much noise sits among the closest vectors. Even if the top-k matches look good, if the results just behind them have practically identical distance scores, retrieval is going to fail a lot in practice. Ideally you could calibrate a constant threshold so that everything closer is relevant and everything farther is irrelevant.
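A rough sketch of that noise check, assuming ascending distance scores from the vector store; the 0.05 margin is an arbitrary illustrative value, not a recommendation:

```python
# "Noise gap" heuristic: compare the weakest of the top-k matches against the
# strongest result just behind it. If the gap is tiny, the top-k probably are
# not meaningfully closer than the noise floor.

def retrieval_looks_reliable(distances: list[float], k: int = 5, margin: float = 0.05) -> bool:
    """distances: ascending distances for the top-N nearest neighbours (N > k)."""
    if len(distances) <= k:
        return False  # not enough trailing results to judge the noise floor
    top_k_worst = distances[k - 1]  # weakest match we would keep
    noise_best = distances[k]       # strongest match we would discard
    return (noise_best - top_k_worst) >= margin

# Scores bunched together -> likely unreliable:
# retrieval_looks_reliable([0.21, 0.22, 0.22, 0.23, 0.23, 0.24, 0.24], k=5)  # False
```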