
Some of my searches aren't really searches; they're questions that Google has the information to answer directly. A few weeks ago I was trying to remember the name of a site that put up two movie posters and had you pick which one you liked more.

Googled "What was the website that showed two movie posters and you picked the one you liked more?" and I got links to reddit, lots to letterboxd, some quora, and a lot more, all irrelevant to my question.

Asked ChatGPT that same question verbatim and

> The website you're referring to is probably "Flickchart." It's a platform where users can compare and rank movies by choosing which one they like more between two movie posters or movie titles. Please note that my knowledge is up to date as of January 2022, and the availability and popularity of such websites may change over time.

Another time I was looking for the release dates of 8 and 16-bit consoles. With Google I had to search for each console individually; sometimes it offered a card with the release date, sometimes it didn't and I'd have to go do more digging.

So I asked ChatGPT and got a nicely formatted list with dates.



Very similar to your second example, I've been using ChatGPT to calculate the total calories in a recipe. Before, I would consult a database like the USDA food database for nutrition info and do the measurement conversions manually. I tried asking ChatGPT about the same recipe, and the total it gave was within 1% of what I calculated. It runs into a couple of snags I've noticed, though.

First, it always gives a calorie count for cooked meat, but it should assume the meat is uncooked since I said it was for a recipe.

Second, it seems to struggle with the concept of uncooked rice. If you ask it to work with 1 "rice cooker cup" of rice, it refuses because that isn't a standard measurement. If you swap in the converted standard measurement (3/4 cup), it's still way off: it told me 3/4 cup of uncooked rice is about 150 calories when cooked, a third of what the USDA database gives. When you point out that 3/4 cup of uncooked rice is a large serving once cooked, it changes its answer to 375 calories, still about half of what the USDA database gives. But this is fine for me, because rice is not typically part of my recipes since it doesn't usually require special preparation.

Overall it reduces a 10-minute task to 10 seconds, but you need to know enough about the ingredients to spot obvious problems in its results. In my case I could see the calories given for meat were way too high, and for rice way too low. It gave a better answer after I told it to fix the former and ignore the latter.

I tried a second recipe and the total it gave was 2% under my calculation, but I did not see any obvious error in its result so I could not correct it further.

It is unfortunate that you kind of have to trust that the numbers are correct, but this is no different from the nutrition details on sites like MyFitnessPal, which are often wrong when you examine them closely.
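
For reference, the manual version of this task is basically: look up calories per 100 g for each raw ingredient, convert each measurement to grams, then multiply and sum. A rough sketch in Python; every figure below is an illustrative placeholder, not a real USDA lookup:

    # Manual recipe tally: grams of each raw ingredient times kcal per 100 g.
    # All numbers are illustrative placeholders, not actual USDA values.
    recipe = {
        "chicken thigh, raw":   (454, 144),  # (grams, kcal per 100 g)
        "white rice, uncooked": (139, 365),
        "olive oil":            (14, 884),
    }
    total = sum(g * kcal / 100 for g, kcal in recipe.values())
    print(f"Total: {round(total)} kcal")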


Wait. 150 is one third of the USDA answer, and 375 is half of the USDA answer? What is the USDA answer?

This equation is beyond my paygrade!

Edit: I asked our GPT-3.5 bot to solve this, and it hallucinated "pulling up the USDA database," complete with a "(a few moments later...)" message, before giving me 160 calories as the USDA answer.

I asked the same bot (identical prompts) with GPT-4-Turbo enabled, and it went through it "step by step" to say the correct answer is 461 calories: 1/3 cup uncooked is 1 cup cooked, so 1 rice cooker cup (160 g) = 3/4 cup uncooked, so 2.25 cups cooked * 205 = 461 cal.
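
Spelling out the bot's arithmetic (using its own yield and per-cup figures, which may themselves be off):

    # GPT-4-Turbo's own assumptions: 1/3 cup uncooked ~= 1 cup cooked,
    # and ~205 kcal per cup of cooked white rice.
    uncooked_cups = 3 / 4            # 1 rice cooker cup (160 g), per the bot
    cooked_cups = uncooked_cups * 3  # 3x volume yield when cooked
    print(cooked_cups * 205)         # 461.25 -- matches its 461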

Is that the right answer? If so, 375 seems far from "half"


The database gives 716 calories for 1 cup of uncooked rice, so 537 calories for 3/4 cup. It was a mistake to say 375 was half of 537; my intention was to point out that it's not close.

https://fdc.nal.usda.gov/fdc-app.html#/food-details/168881/n...
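
Putting the numbers side by side (716 kcal per cup uncooked is the USDA figure linked above):

    usda_per_cup = 716           # kcal for 1 cup uncooked white rice (USDA)
    print(usda_per_cup * 3 / 4)  # 537.0 kcal for 3/4 cup uncooked
    print(375 / 537)             # ~0.70 -- so 375 is ~70% of the USDA figure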

I don't have access to GPT-4 but your results are interesting. I don't understand why it's trying to approximate the yield of the uncooked rice after cooking before giving a calorie estimate. From my testing it doesn't do this for other ingredients like raw chicken.

It seems to be more accurate when giving calorie estimates for a list of ingredients instead of a single ingredient. For example, in a recipe that includes "boneless skinless chicken thighs (1 lb, raw)" ChatGPT gives "about 880 calories" as the answer. But in a new prompt if you ask it only about the chicken and nothing else, it gives "between 1100 and 1300 calories." The USDA database shows 144 calories per 100 grams, so 653 calories per pound. The first answer by ChatGPT is much more accurate.
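
The unit conversion, for anyone checking:

    GRAMS_PER_POUND = 453.592
    kcal_per_100g = 144                           # raw boneless skinless thigh (USDA)
    print(kcal_per_100g * GRAMS_PER_POUND / 100)  # ~653 kcal per pound
    # ChatGPT's "about 880" overshoots by ~35%; its "1100-1300" by ~70-100%.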

I have no idea why this is the case, and I bet this would be easy for a researcher to demonstrate if true. Alas, I don't have API access.
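
If anyone with API access wants to try it, here's a minimal sketch of the experiment, assuming the v1 OpenAI Python client; the model name and prompts are placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Same ingredient alone vs. embedded in a full ingredient list:
    print(ask("How many calories in 1 lb of raw boneless skinless chicken thighs?"))
    print(ask("Estimate the total calories in this recipe: 1 lb raw boneless "
              "skinless chicken thighs, 3/4 cup uncooked white rice, 1 tbsp olive oil."))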


You should probably consider most calorie calculations to be ±50% anyway, due to the huge differences between human digestion and the way calories are estimated in a lab.


Google is kind of becoming a "System for double-checking if GPT is hallucinating".

IMO Google should convert their search box to a Bard chat input, and you get a hybrid of Bard conversation with real links from their search engine.

It's actually astounding that, in the face of GPT's rapid rise, that search box is still an old-school search box, looking dumber and less attractive every day.


That's Bing Copilot, and it's still not that impressive; it's just a search engine with Clippy, then.

Google can't change for now; in doing so they'd undermine all the AdWords accounts, Google's real customers, paying six figures to stay on top of SERPs.

But the other competitors can and will.

So it goes


Could it be that it's a scale problem?

How many queries per second does OpenAI get vs Google search?


Good point. Something else that occurs to me is that Google may be between a rock and a hard place with their advertisers. Currently advertisers pay a lot of money for top of the page "sponsored" results, and there's probably not an immediately straightforward solution to integrating Bard while maintaining the visibility of those ads.


Even if they integrate ads well, the simple fact that Bard is costlier to run will hurt Google. If search engine profits go down by 60%, MSFT and OpenAI can still pursue it, but Google can't.


Let ads die! They have virtually no lift over organic search anyhow.


It feels like Google was able to enshittify their results by loading them with ads and going for longer 'engagement' times, because it used to be worth it to persist with Google until you got its best answers.

They optimised for profit. Now what?

If they address the competition then they have to undo that optimisation and cut into their own profits.

My guess is they won't settle for merely extortionate profits; they won't address the competition fully, and so people will flip to using LLM-first search.

I'm hoping that in a year or so we're asking why Google went from being the top search on Bing to being only one of many - principally LLM-based - tools people use for getting answers (most of what 'search' represents now). IMO Google could do with being knocked down a peg or two.


It’s true. “Making the world’s knowledge accessible” is in tension with driving traffic to ads.

It would be astonishing if a company could succeed at scale by charging subscriptions for search and AI services instead of ads. Google would be truly doomed.


> Google is kind of becoming a "System for double-checking if GPT is hallucinating".

If you have to double-check with Google, why not just start with Google and skip ChatGPT altogether?


Because instead of spending minutes to hours collecting, sorting, filtering, parsing, and synthesizing Google results, ChatGPT provides a near-instantaneous answer that is often correct. It's then trivial to validate via Google.


Google is pretty good at short-answer questions (the occasional summary at the top can do a good job, and the "people also ask" section can be pretty good), but if you ask something like "What are all the tax implications of moving to France?", those "smart" sections tend to be useless, and then you get sent on a wild goose chase across websites of varying quality.

On the other hand GPT will give you a really good summary of all the issues, personalized to your financial situation, and you can dig into details with followup questions. Then if you want to verify a particular tidbit you can go to Google.


Google is slowly adding a lot of small AI features, though; for example, they recently added these automatic "topic" search suggestion pills and extended their auto-generated Q&A sections. The latter are hilarious, because whenever you expand a question, new questions are generated, ad absurdum if you want :)


Microsoft Bing makes the same mistake: if I use your browser for chat, why force me to search first? :(


The first is maybe a good example of where an LLM-integrated search engine makes sense, because "compare two movies" has flickchart as the third option for me. But it is nowhere to be seen for "compare two movie posters", which is how I read your search originally (and indeed flickchart is about ranking movies, not the posters themselves ... obviously).

Anyway, an LLM clearly teased that out, whereas if you misremember or misread something, a straight search is going to be bad.

Most of my searches are the opposite. I wanted to know about an obscure movie from the 80s with a toy helicopter. Google very neatly suggests Defense Play (correct), but most LLMs I've tried end up just suggesting very popular films with a helicopter, and it ends up being quite difficult to get them to give me information about obscure stuff. Also, with that same search, the LLM suggests a bunch of incorrect films since (and I figured this out later) it turns out it was all sourced from a single forum thread from the 90s where a bunch of the posts suggested movies that don't have toy helicopters in them. Go figure.


Thanks, this gave some good insight into GPT-4. If I provide the entire Wikipedia page contents but blank out the movie name and director name, it can't recall it. https://chat.openai.com/share/c499e163-3745-48c3-b00e-11ea42...

However, if I add the director it gets it right: https://chat.openai.com/share/a602b3b0-5c17-4b4d-bed8-124197...

If I only give it 1980s film and the director's name it can still get it. https://chat.openai.com/share/d6cf396b-3199-4c80-84b9-d41d23...

So it's clearly not able to look this movie up semantically and needs a strong key like the director's name.

EDIT: Digging deeper, it's clear the model only has a very foggy idea of what the movie is about: https://chat.openai.com/share/d0701f53-1250-421e-aa4b-dc8156... People have described these kinds of outputs as the text equivalent of a highly compressed JPEG, which seems to fit what's going on here. It gets some of the top-level elements right and kind of remembers there's some kind of vehicle that's important, but it has forgotten all the details, even the date the movie was released. Unlike a human, though, who might indicate their fuzziness (was it a helicopter or a submarine?), GPT-4 gladly pretends it knows what it's talking about. I think it's likely a solvable problem: the model probably has the information to know when it's confident and when it's in a fuzzy JPEG region, but the current alignment isn't doing a great job of surfacing that.


"compare two movies" on Google doesn't even have flickchart for me, and rewording my question to not involve posters also doesn't have flickchart in the results.

So, get as pedantic as you'd like, but Google really dropped the ball. ChatGPT gave flickchart with both questions.


These (I call them) "reverse-dictionary" searches are absolutely amazing on ChatGPT! I've asked it several times things like "what's the word you use to describe a situation that's <like this> but slightly <like that>?" and ChatGPT consistently "gets" what I'm looking for. What a time to be alive.


I did this the other day. For some reason, I was trying to think of a word for nationalist a-holes, and it was something like Django, and I thought, is it Jengoism? But that didn't sound right...

So I asked ChatGPT: "There’s a word that sounds kinda like Django but it’s like jingoistic"

Turns out I was right, it's "jingoistic", but then I could go on and ask:

"Give me the definition of the word, then."

"The term "jingoistic" refers to a form of extreme and aggressive patriotism, often characterized by a belligerent foreign policy. It denotes a belief in the superiority and righteousness of one's own country, sometimes leading to the denigration of other nations and the advocacy of aggressive political and military actions."

"Where does the word come from?"

"The origin of the word "jingoistic" traces back to a popular British song from the 1870s. The song, with the refrain "We don't want to fight, but by Jingo if we do, We've got the ships, we've got the men, and got the money too," was sung during the Russo-Turkish War (1877-1878). It expressed support for the British government's aggressive stance towards Russia.

The phrase "by Jingo" in the song is believed to be a minced oath, a euphemistic expression used to avoid swearing. "Jingo" here stands in for a stronger word, likely a reference to God. Over time, the term "jingoism" evolved to describe an aggressive, belligerent form of patriotism and foreign policy, and "jingoistic" is the adjective form used to describe someone who exhibits this attitude."

Fascinating! For anyone who is interested in learning, there hasn't been a tool like ChatGPT since the internet itself!


Important note here: ask for the definition in a separate thread; otherwise, if it hallucinated the word, it will keep hallucinating the definition.


on GPT4?


Great idea.

Me: “staircase wit” is a very charming phrase. What are some other unexpected phrases that might also be fun to know?

ChatGPT: <I originally included the response, but you can ask it yourself if you care to. I feel this was a spur-of-the-moment idea and, rereading it, I feel it’s just like Googling something and posting the result. Not really a high quality contribution on my part.>


> So I asked ChatGPT and got a nicely formatted list with dates

I might be an outlier here, but to me this wouldn't be useful at all. I wouldn't trust ChatGPT to get it right, so I'd go to Wikipedia to double-check, at which point the amount of effort saved is little to none.


I should have mentioned I was only looking for loose dates, and I'm also at the point where I don't trust the information Google offers in its cards, as it frequently doesn't relate to what I'm searching for.


> I was looking for the release dates of 8 and 16-bit consoles

This is a case where search has taken a step backward. The old Google would have worked for a simple search like that, "NES (or Sega, whatever) + 'release date'", and simply returned the best results that contained those two terms. Today we can't have that, because they make more money by intentionally fuzzing your search parameters so you accidentally click on sponsored content.

I think we're going to see a lot more of this: renewed excitement and enthusiasm when A.I. "discovers" things that plain old imperative algorithms figured out 20 years ago.


> What was the website that showed two movie posters and you picked the one you liked more?

Google Bard now answers this with Flickchart as the first suggestion.

I also got a clean list of release dates for the console question: https://g.co/bard/share/ceb0eac6c69f


I asked Bard, verbatim. Its first response was that there were many sites fitting that description, and it asked me to narrow it down. Since I didn't have any idea how to do that, I just asked it to list them. It listed four (the top being Flickchart), but also noted there are many more websites like that.

https://g.co/bard/share/b58043d8aca0


How do you determine whether or not ChatGPT just made up whatever answer it gives you?


How do you determine if the webpage Google gave you is honest? Most people don't seem to care.

Phind provides references. The problem is that as the webpages used to feed LLMs become written by LLMs, we're going to be up to our necks in even more [subtly] wrong information than the currently very widely peddled disinformation from advertisers and political groups.


> How do you determine if the webpage Google gave you is honest? Most people don't seem to care.

That's the thing that surprises me most about these "How do you know ChatGPT is correct?" questions. They seem to expose the fact that a lot of people were taking whatever they found on the internet (sites they found on Google, wiki pages, Reddit comments, etc.) at face value without checking further. Most things people find on the internet are potentially inaccurate and need to be double-checked.


Google gives me sources for answers and I can evaluate them to determine whether the answer is correct. Those sources are often forums, where a discussion about a topic from many different sources may occur. Often, I am checking to see whether several different sources are giving the same answer. Google enables me to exercise media literacy. When ChatGPT gives me an answer, I have no idea where the information came from, nor do I have any tools to evaluate the answer.


Yeah, it's really good for those "I'm trying to remember a thing, here's a bunch of random details I remember about it, what is it?" kinds of queries.



