
While I get where you are coming from, when I observe the people around me, they just want a query answered and don't care about the rest. We will see, but I don't think the answer is a well-curated Wikipedia article, because that's just too much information. The AI use cases so far assume you have a question and want an answer; everything else, I think, can be served by good old search, which will still exist.

All the benefits of deep learning (natural language understanding, context, engaging answers) make this ergonomically possible in a way that can't be replicated with symbolic AI. Symbolic AI is frankly irrelevant to the discussion right now and will, at best, imho, play only some part in the overall process. If attributable sources are important, I think it's far more likely that a deep learning approach will be used to provide them. But for most queries answered by an AI, I don't think attribution will matter, since the queries will be trivial; for the rest, I expect you switch back to normal search behaviour so you can trust your source.



In a larger sense, this is true, for now. But it's not a reasonable shortcut to supply information that is presented as correct but is in fact partially made up. Either there will be a well deserved public backlash, or in a pretty real way, reality will be irrecoverably warped.

It's pretty hard to imagine any answer that is useful, no matter how trivial, if it's wrong 20% of the time. Deep Learning is an amazing parlour trick that can be useful in the right circumstances, which basically means an expert has to be present to interpret its results.

I'm not convinced that a fully padded conversation is what people want. I find it annoying. Google was on the right path with snippets; they just need to make them easier to consume and emphasize that the search can be continued. I think Google is being very conservative, partially for safety reasons, but also because they won't take chances if they don't have to.

Of course, all these searches would be much more useful if they had more intimate knowledge of the searcher, but that's another dangerous path.


> It's pretty hard to imagine any answer that is useful, no matter how trivial, if it's wrong 20% of the time

But that assumes the error rate is independent of the search term, which is not my experience. Of course you can't trust it 100%, but for easy searches I think it's good enough that I would trust it to be correct. You have to get used to the tool. If the accuracy is good enough for easy search terms that people get some use out of it, then we have a product, I think. Of course it's probabilistic, but even Wikipedia is wrong sometimes. These things won't always be correct; it's always an approximation. But that's also a reality of life: information is inherently unreliable, and so are our tools.

Just view those tools as you would view humans: they help, they can be wrong, and if you need confidence you have to check some sources. At least that's how I would view them.

Whether this all works out is another question. But it's an approach different from search as it exists, and it may complement it. People thought Siri would revolutionise everything, and then it didn't.


I agree with what you are saying, but any time it's presented as more than a research tool, maybe useful for the most trivial answers, it's a problem. It should be emphasized that it is based on sources and helps scour and fake-summarize them; but despite Microsoft calling it a "copilot for the web," that's not how it's being received. It would simply be better if every statement it made were referenced.





