> I can pretty much verify the answer just by reading it
Only if you have domain knowledge. In both of your examples, you have to 1) know geography to determine whether "Técolum" and "Tolum" are indeed city names or just made up; and 2) know what might be acceptable ("good idea") or not at a birthday party.
Yes, it'll probably save you some time, but it's not orders of magnitude.
> In other cases, clicking through to check that a linked source supports a claim
this supposes that the AI provides a link for every fact. Google search + Gemini does, but most LLM interfaces don't.
secondly, if I have to click through every link and read through the source to determine whether the details of a "summary" are correct or not, that really does not save me much time over conducting a search and looking through the linked sources myself
Anecdote from a couple of weeks ago: my wife's professor sent her 5 citations and summaries related to a medical research project. She didn't say they were LLM-generated, but it was obvious (to me, not my wife) from the formatting alone that they were. None of the 5 papers existed as cited. My wife was confused and spent a lot of time trying to figure out what was wrong and why she couldn't find any of the papers. A Google Scholar search, plus some logical thinking, turned up 2 papers close enough to the citations to be the intended ones, but the other 3 couldn't be matched at all. In the end, the time spent sorting valid from invalid citations and finding valid replacements was significantly greater than just doing the search and looking through the abstracts would have been.
PS: LLMs are fine for information that can be "fuzzy": suggesting places to go on vacation in September, planning a birthday party, etc. But I wouldn't consider that a "revolutionary" advance.
It's common to have a reasonable intuitive sense of whether something works as a birthday party yet be stumped when coming up with ideas. Or to be able to see that a word ends in "um" and is a real word/place you recognise (or double click -> search if not) without necessarily being able to list many yourself if asked. I don't mean to say that verification requires absolutely zero knowledge, just that it can be (and often is) substantially easier, so I don't think insane_dreamer's reasoning holds.
> this supposes that the AI provides a link for every fact.
For andrewmutz's LLM, it was the statement "the user has the ability to easily double check the results whenever they like" that was suggested to make that unnecessary in the first place.
Outside of that case, people have the choice to use the LLM that best suits their task - and most popular ones I'm aware of do support search/RAG.
Certainly possible to waste time by doing something like what your wife's professor seemingly did (getting non-link "citations" generated by an LLM without search/RAG, then sending them to someone who'll probably infer "these must exist somewhere because the sender read them" as opposed to "these were vaguely recalled from memory so may not exist") - I don't recommend doing that.
> secondly, if I have to click through every link and read through the source to determine whether the details of a "summary" are correct or not, that really does not save me much time over conducting a search and looking through the linked sources myself
A lot of LLM responses are for the kind of thing that doesn't need verification, or for which verification doesn't depend on checking the source. For situations where checking the source is relevant, that's typically just going to be the source for the part you're interested in - in the same way a Wikipedia article can provide a useful lead without needing to check every source the article cites. Anecdotally I find that, while far from perfect, it saves a lot of time when it can surface information that would've otherwise required digging through a dozen or so sources.
(Later addendum: I just noticed that for some reason I had thought you were a different user from the one who wrote the initial comment I replied to. The "so I don't think insane_dreamer's reasoning holds" was meant to refer to your previous comment.)