One annoying thing about using LLMs to answer technical questions: when you provide context like "I have already tried X and Y, and Z won't work because…", the model does what LLMs do, using all the words in the prompt to string together more words into an answer, often suggesting exactly the things you just ruled out. LLMs would be so much more intuitive to use for technical questions if this problem could be solved… it sounds so obvious that maybe a solution already exists?