Hacker News

We use the term "pre-googling" for this sort of "information retrieval". You might have some concept in your head and want to know the exact term for it; once you get the term you're looking for from the LLM, you move to Google and search for the "facts".

This might be a weird example for native English speakers, but recently I just couldn't remember the term for a graph where you're allowed to move in one direction and cannot do loops. The LLM gave me the answer (directed acyclic graph, or DAG) right away. Once I had the term I was looking for, I moved on to Google search.
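For what it's worth, the property that term names is easy to check in code. A minimal sketch (the `is_dag` helper and the dict-of-lists graph format are my own illustration, not anything from the thread): a directed graph is a DAG exactly when a depth-first search finds no back edge.

```python
def is_dag(graph):
    """Return True if the directed graph (dict: node -> list of successors)
    contains no cycle, i.e. it is a directed acyclic graph."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / finished
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for neighbor in graph.get(node, []):
            if color.get(neighbor, WHITE) == GRAY:
                return False  # back edge: we reached a node on the current path
            if color.get(neighbor, WHITE) == WHITE and not visit(neighbor):
                return False
        color[node] = BLACK
        return True

    return all(visit(n) for n in graph if color[n] == WHITE)

# "move in one direction, cannot do loops":
print(is_dag({"a": ["b"], "b": ["c"], "c": []}))  # True  (a -> b -> c)
print(is_dag({"a": ["b"], "b": ["a"]}))           # False (a <-> b is a cycle)
```

(Python's standard library also covers this: `graphlib.TopologicalSorter` raises `CycleError` on a cyclic graph.)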

The same "pre-googling" works if you don't know whether a concept even exists.



> graph where you're allowed to move in one direction and cannot do loops

To be fair, you didn't need an LLM for this. If you Google that phrase, the answer (DAG) is in the title of the first result.

(Not to invalidate your point, but the example has to be more obscure than that for this strategy to pay off.)


I recently started watching Fallout, and it reminded me of a book I'd read about a future religious order piecing together pre-bomb scientific knowledge. The LLM immediately pointed me to A Canticle for Leibowitz (which is great, btw). Google results will do the same, but the LLM is much faster and more direct. I find it great for stuff like this, where you know there is an answer and will recognise it as soon as you see it. I genuinely think it can become an extension of my long-term memory, but I'm slightly nervous about the effect it will have on my actual memory if I just don't need to remember stuff like this anymore!


Pre-googling is an excellent idea: you are augmenting the query, not generating nonsense answers. My wife uses ChatGPT as a thesaurus quite a lot.



