
So, the exact stuff Google used to be good at.


The exact stuff I now use Kagi for. Finding obscure relevant PDFs that Google didn't is literally one of the things that made me switch.


Pretty much, though Google got bad at these things well before LLMs came onto the scene. We can all debate which project manager was responsible and the month and year things took a downward turn, but IMO the obvious catalyst was that "barely good enough" search creates more ad impressions, especially when virtually all of the bad results you serve are links to sites that also run Google-managed ads.


Oh, sure, Google was starting to take a dive almost a decade before LLMs came on the scene.


The main reason Google doesn't find good search results anymore is that there are no good search results anymore, because there are no websites anymore. There's not much better you can do.


Right, Google definitely isn't helping themselves IMO,

but the reason search got hard is that it became profitable to be the "winner" of a search query. It's a hostile market that works to actively undermine you.

AI will absolutely have the same problem if it "takes over," except the websites that win and get your views will not look like blogspam; they will look like (and be) the product of adversarial machine learning.


There was a very clear turning point: when Amit Singhal was pushed out for sexual harassment in the MeToo era. He was the heart of search quality, but he went too far when he was drinking.


Apple seems to be the only firm that does a good job of preventing itself from falling prey to what has led to the demise of every corporation in history.


Nope. I'm talking about the stuff keywords are no good at, and which Google Scholar doesn't tend to surface because it's just not cited much or it's from a different niche.

The fact that LLMs understand your question semantically, not just through keyword matching, is huge.


Another win for big tech: Google has been enshittified to such a point that you can now spin up a machine that consumes 1000x the power to give you a result with coin-toss odds of being totally made up.


That's nothing! The next generation will use the entire power output of a small nation for a week just to tell you a nice cake recipe.


An LLM query probably uses about 10x more electricity than a matching search query. There's enough wiggle room in the assumptions that they might be about even. There is no way search uses 1/1000th the power of an LLM.
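The 10x claim above can be sketched as back-of-envelope arithmetic. The per-query figures below are assumptions chosen for illustration (ballpark numbers often cited in public discussion, not measurements); real energy per query varies widely by model, hardware, and data center.

```python
# Assumed per-query energy figures (watt-hours) -- illustrative only,
# not measured values. Swap in your own estimates to test the claim.
SEARCH_WH = 0.3   # assumed energy for one traditional search query
LLM_WH = 3.0      # assumed energy for one LLM chat query

ratio = LLM_WH / SEARCH_WH
print(f"Under these assumptions, an LLM query uses ~{ratio:.0f}x a search query")

# The "1000x" figure upthread would require an LLM query to cost this much:
implied_llm_wh = SEARCH_WH * 1000
print(f"A 1000x gap would mean ~{implied_llm_wh:.0f} Wh per LLM query")
```

Even doubling or halving either assumption moves the ratio by a small factor, nowhere near three orders of magnitude.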


What questions are you asking LLMs where they're wrong 50% of the time?


People love gambling.



