First of all, this wasn't a replacement for search; no search was claimed to have taken place. The screenshot from the complainant shows this was not in a search context.
Secondly, LLMs are daydream machines; we don't expect them to produce "truth" or "information". So the NYTimes comparison feels very wrong.
Thirdly, this story is about a man who typed a text string into the daydream machine. The machine continued appending tokens to the input text to make it look like a sentence. That's what happened. Nothing to do with truth seeking or with "protecting" big tech.
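To be concrete about "appending tokens", here is a minimal sketch of greedy next-token continuation using a small open model (gpt2 via the transformers library). ChatGPT's actual model, decoding settings, and any retrieval layer are not public, so this is an analogy, not a reproduction:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Who is Arve Hjalmar Holmen?"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(40):                       # keep appending tokens to the input text
            logits = model(ids).logits[:, -1, :]  # scores for the next token only
            next_id = logits.argmax(dim=-1, keepdim=True)  # take the most likely token
            ids = torch.cat([ids, next_id], dim=-1)

    print(tokenizer.decode(ids[0]))

The loop never consults any source of facts about the name; it only extends the text with whatever tokens look most likely.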
There is a whole industry that has been pushing for a couple of years now to tell us that these things work, that they replace humans, that they work for search, etc. Saying "we don't expect them to say the truth" is a bit too easy.
If nobody expected them to say the truth, or at least be accurate, they wouldn't have been designed as programs that speak with such authority, and they probably wouldn't be the target of massive investments.
So yeah, in principle I may agree with you, but in the socio-technical context in which LLMs are being developed, the argument simply does not work, in my opinion.
>There is a whole industry that has been pushing for a couple of years now to tell us that these things work, that they replace humans, that they work for search, etc.
Who are you referring to? Did someone tell you that chatgpt "works for search" without clicking the "search" box?
Also, are you sure that AI designers intend for their LLMs to adopt an authoritative tone? Isn't that just how humans normally write in the corpus?
You also seem to be arguing that, because the general tone you've been hearing about AI is "they work for search", OpenAI should therefore be liable for generative content. But the general tone of discussion you've been hearing doesn't map 1:1 onto any company's claims about how its product works.
Just as an example, read https://openai.com/index/introducing-chatgpt-search/ and see how many mentions there are of "better information", "relevant", "high quality". Then see how many mentions there are of "we don't expect it to be real stuff".
> Also, are you sure that AI designers intend for their LLMs to adopt an authoritative tone? Isn't that just how humans normally write in the corpus?
If designers wanted it any other way, they would have changed their software. If those who develop the software are not responsible for its behavior, who is? Technology is not neutral. The way AI communicates (e.g., all the humanizing language like "sorry", "you are right", etc.) is their responsibility.
In general, it is painfully obvious that none of the companies publishing LLMs paints a picture of its tools as "dream machines". That narrative is the exact opposite of what is needed to gather immense funding, because nobody would spend hundreds of billions on a dream machine. The point is creating hype around LLMs being able to do human jobs, and that means them being right - while maybe making "some" mistakes every now and then.
All you need is to go on openai website and read around. See https://openai.com/index/my-dog-the-math-tutor/ or https://openai.com/chatgpt/education/ just as a start.
Who would want a "research assistant" that is a "dream machine"? Which engineering department would use something "not expected to say real stuff" to assist in designing?
>The screenshot from the complainant shows this was not in a search context.
Of course it does. The question shown in the screenshot is "who is Arve Hjalmar Holmen?". That's something someone would type into Google search, it's not "write me a fictional story about a man called Arve Hjalmar Holmen".
People use these systems like search tools; they're sold and advertised as information-retrieval systems (literally, what else would be their point for 90% of people?); they're starting to be baked into search products, and in return web search is itself being baked into the AI systems; and so on. The top post on HN right now is Claude announcing:
"Instead of finding search results yourself, Claude processes and delivers relevant sources in a conversational format."
What are you gonna tell me next, the bong on your table is really a vase?
>The screenshot from the complainant shows this was not in a search context.
>Of course it does
No, of course it doesn't, because there's a specific blue button for conducting web searches in chatgpt, and other visual indicators, which are not present here.
So when I said "the screenshot shows", I was referring to things we could verify in the image, namely, that the question was not asked within a search context.
The top post you refer to, about Claude, is specifically about the search context which wasn't present here.
> The question shown in the screenshot is "who is Arve Hjalmar Holmen?". That's something someone would type into Google search, it's not "write me a fictional story about a man called Arve Hjalmar Holmen"
LLMs are daydream machines.
If you open a new window with an LLM and ask it "what is ..." or "who is ...", you'll often get a confident-looking but completely false answer, because that's how LLMs work. You can ask it who or what something you just made up is, and it will trip over itself hallucinating sources to prove it exists.
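Same point in code: a minimal sketch of sampled decoding for a made-up name (gpt2 and a temperature of 0.8 are my assumptions; the real products add instruction tuning and safety layers on top of this). Nothing in the loop checks whether the name refers to a real person; it just keeps emitting plausible-looking tokens:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Who is Zorvath Klemmingsrud?"  # a name invented for this example
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(40):
            logits = model(ids).logits[:, -1, :] / 0.8         # temperature 0.8 (assumed)
            probs = torch.softmax(logits, dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)  # sample, never verify
            ids = torch.cat([ids, next_id], dim=-1)

    print(tokenizer.decode(ids[0]))

Run it a few times and you get different, equally confident continuations about someone who does not exist.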