I thought the consensus was that Google search was awful and rarely produced a useful answer to the question asked. I certainly run into that a lot myself when using Google search.
I have also had ChatGPT outperform Google in some respects and faceplant in others. Personally, I don't trust any single tool to have the answer, and I feel nobody should.
To me, the strange part of the whole thing is how much we forget that we talk to confident "wrong" people every single day. People are always confidently wrong about things they have no clue about.
From the prompt leakage it looks like it is allowed to initiate web searches and integrate/summarise the information from the results of that search. It also looks like it explicitly tells you when it has done a search.
I am left wondering then what information takes priority, if any.
It has four dates to choose from and three timeframes of information, plus a set of instructions meant to counter malicious users thrown in to add to the party.
You do seem to be correct about the search thing as well, though I wonder how that works and which results it is using.
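For what it's worth, the standard pattern for letting a model initiate searches is tool/function calling: the model emits a structured search request, the host application actually runs it, and the results go back into the context for summarising. ChatGPT's real plumbing isn't public, so treat this as a guess at the shape; the model name and the web_search stub below are my own placeholders, not anything from the leaked prompt:

```python
# Sketch of the usual "model-initiated search" pattern via tool calling.
# run_web_search is a hypothetical stand-in; ChatGPT's actual search
# backend is not public, so this only shows the general shape.
import json

from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def run_web_search(query: str) -> str:
    # Hypothetical: call whatever search API the host has access to.
    return f"(stub results for {query!r})"

messages = [{"role": "user", "content": "What changed in Python 3.13?"}]
resp = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=TOOLS
)
msg = resp.choices[0].message

if msg.tool_calls:  # the model decided a search would help
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": run_web_search(args["query"]),
        })
    # Second pass: the model summarises the results it asked for.
    final = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    print(final.choices[0].message.content)
```

If it works anything like this, "which results it is using" comes down to whatever search backend is wired into the tool, and the explicit "I searched the web" notice would just be the model narrating its own tool call.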
> I thought the consensus was that Google search was awful
Compared to what it used to be. Awful is DDG (which I still have as my default, but now I am banging !g every single time since it is useless).
I also ran a few comparative GPT-assisted searches (the prompt asks GPT to craft optimal search queries) and plugged the results into various search engines. ChatGPT + Google gave the best results. I got basically the same poor results from Bing and DDG. Brave came in second.
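If anyone wants to repeat that comparison, here is a minimal sketch of the workflow, assuming the OpenAI Python SDK; the model name, prompt wording, and engine list are my own choices, not part of any recipe:

```python
# Sketch: ask a GPT model to craft search queries for a question,
# then build URLs to paste into different engines for comparison.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
from urllib.parse import quote_plus

from openai import OpenAI

ENGINES = {
    "google": "https://www.google.com/search?q=",
    "bing": "https://www.bing.com/search?q=",
    "ddg": "https://duckduckgo.com/?q=",
    "brave": "https://search.brave.com/search?q=",
}

client = OpenAI()

def craft_queries(question: str, n: int = 3) -> list[str]:
    """Ask the model for n search queries, one per line."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works here
        messages=[
            {"role": "system",
             "content": "You craft concise, high-recall web search queries."},
            {"role": "user",
             "content": f"Write {n} search queries for: {question} "
                        "One query per line, no numbering."},
        ],
    )
    text = resp.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    for query in craft_queries("fridge compressor clicks but won't start"):
        print(f"\n{query}")
        for name, base in ENGINES.items():
            print(f"  {name:>6}: {base}{quote_plus(query)}")
```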
> People are always confidently wrong about things they have no clue about.
I'm going to get pedantic for a second and say that people are not ALWAYS confidently wrong about things they have no clue about. Perhaps they are OFTEN confidently wrong, but not ALWAYS.
And you know, I could be wrong here, but in my experience it's totally normal for people to say "I don't know" or to make it clear when they are guessing about something. And we as humans have heuristics that we can use to gauge when other humans are guessing or are confidently wrong.
The problem is that ChatGPT very, very rarely transmits any level of confidence other than "extremely confident", which makes it much harder to gauge than a person who is "confidently wrong."
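To be fair, the underlying models do expose per-token log-probabilities through the API, and some people read those as a rough confidence signal. A minimal sketch, assuming the OpenAI Python SDK and a logprobs-capable model; the big caveat is that these numbers are poorly calibrated, so a fluent hallucination can still score as "confident":

```python
# Sketch: pull per-token logprobs from a chat completion as a crude
# confidence signal. Caveat: these are not calibrated truth
# probabilities; a fluent hallucination can still score highly.
import math

from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any logprobs-capable chat model
    messages=[{"role": "user",
               "content": "What year did the Eiffel Tower open?"}],
    logprobs=True,
)

tokens = resp.choices[0].logprobs.content
for t in tokens:
    # t.logprob is the natural log of the token's probability.
    print(f"{t.token!r}: p={math.exp(t.logprob):.3f}")

# Geometric mean of token probabilities, a common single-number summary.
avg_p = math.exp(sum(t.logprob for t in tokens) / len(tokens))
print(f"geometric-mean token probability: {avg_p:.3f}")
```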
I think the issue here is that ChatGPT is behaving like a child that was not taught to say "I don't know". "I don't know" is a learned behavior, and not all people learn it. Like on sales calls where someone is trying to push a product: I've seen the salespeople confabulate bullshit rather than simply say "I can find out for you, let me write that down".
The deeper issue is that ChatGPT cannot accurately determine whether it "knows" something or not.
If its training data includes rants by flat-earthers, then it may "know" that the earth is flat (in addition to "knowing" that it is round).
ChatGPT does not have a single, consistent model of the world. It has a bulk of training data that may be ample in one area, deficient in another, and strongly self-contradictory in a third.
Well, I think you are right - ChatGPT should learn to say "I don't know". Keep in mind that generating BS is also a learned behavior. The salesperson probably learned that it is a technique that can help make sales.
The key IMO is that it's easier to tell when a human is doing it than when ChatGPT is doing it.
> I think the issue here is that ChatGPT is behaving like a child that was not taught to say "I don't know". "I don't know" is a learned behavior, and not all people learn it.
Even in humans, this "pretending to know" type of bullshit, however irritating and trust-destroying, is motivated to a large extent by an underlying insecurity about appearing unknowledgeable. Unless the bullshitter is also some kind of sociopath, that insecurity is at least genuinely felt. Being aware of that is what allows us to feel empathy for people bullshitting even when we know they are doing it (like the salespeople from the play Glengarry Glen Ross).
Can we really say that ChatGPT is motivated by anything like that sort of insecurity? I don't think so. It's just compelled to fill in bytes, with extremely erroneous information if needed (try asking it for driving directions). If we are going to draw analogies to human behavior (a dubious thing, but oh well), its traits seem more sociopathic to me.
>> People are always confidently wrong about things they have no clue about.
> I'm going to get pedantic for a second and say that people are not ALWAYS confidently wrong about things they have no clue about. Perhaps they are OFTEN confidently wrong, but not ALWAYS.
I delight in interacting with chatbots, and I'm OK using them even though I know they frequently make things up.
I don't want my search engine to make things up, ever.