Google's damned if they do and damned if they don't here:
- If they don't make search AI-centric, they're going to get lapped by AI-first competitors like Perplexity, OpenAI, etc. We saw many people here predict Google's impending demise from this.
- If they do make search AI-centric, they're unfairly consuming the world's content and hoarding the user traffic for themselves.
Since no reasonable company is just going to stand by and willingly let itself be obsoleted, Google's obviously going to go for option 2. But had they for some reason stood down, then they would have been supplanted by an AI competitor and the headline would read "News Sites Are Getting Crushed by Perplexity" - just a few years later.
Seems to my untrained eyes like Google's AI search is actually the best on the market, no? Seems like a lot of HN users have trained themselves not to type queries into the search prompt anymore and then complain about the quality of a product they don't use.
Every once in a while I bother not ignoring a Google AI overview, then I waste some time fact-checking it and find out it's wrong. Most recently about a Python library (where it hallucinated a function that doesn't exist, complete with documentation and usage examples) and breaking news (where it authoritatively said a [non-culture-war, non-controversial, local] thing doesn't happen, right above a dog-bites-man story from a conventional news source about how the thing had happened again).
My experience with Gemini in AI Studio mirrors what the AI overview shows: hallucinated libraries, with its internal reasoning dialogue reinforcing the hallucination and saying "the user doesn't know how to search on PyPI".
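For the hallucinated-library case specifically, a quick sanity check catches this class of error before you waste time on invented docs. A minimal sketch using only the standard library; `parse_lenient` is a made-up name standing in for whatever function an AI overview might claim exists:

```python
import importlib


def attr_exists(module_name: str, attr: str) -> bool:
    """Check whether a claimed function/class actually exists in an installed module."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        # the whole library may be hallucinated, or simply not installed
        return False
    return hasattr(module, attr)


# "json.loads" is real; "json.parse_lenient" is a hypothetical hallucination.
print(attr_exists("json", "loads"))          # True
print(attr_exists("json", "parse_lenient"))  # False
```

Two lines at a REPL beats reading confidently wrong usage examples, though it obviously only catches missing names, not wrong semantics.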
Pretty much what I said, no? You don't use the product, and when you do, you do it through a filter[1] where you only remember the bugs. Do you use other AI search products and find that they don't show this behavior?
[1] I mean, come on: framing it as "bother not ignoring" is just a dead giveaway that you aren't doing a product evaluation in good faith!
I am doing the review in good faith though--by default, I scroll past to the first result, then if it seems unsuitable and I'm desperate enough I check the LLM thingy. If it were providing any value it would sometimes be both novel and correct.
Usually it's non-novel (correct but harmlessly unhelpful, restating the web search results). When it's novel, it's because it's wrong.
I would remember the situation where reading the LLM thingy added any value if it ever happened. The weird little UI thing they do, where they only show the LLM result if you wait for it to render, makes this very easy; I have to scroll up to even consider it.
People are now changing how they search and gather information to use AI. You're automatically discounting that person's experience because they avoid using AI for valid reasons. A lot of the time Google AI is outright wrong, not even close to correct. It makes sense not to rely on it and to only go back occasionally to see if it's improved.
I don't "ignore AI". I just haven't changed how I search and gather information because it's clearly not accurate yet. I still have to fact-check it, which negates the benefit (time saving). Occasionally I go back to see if it's improved, and usually it hasn't. That's not bias.
The model that's doing AI Summary for search results - that presumably needs to be fast and cheap because of the scale - is still sufficiently bad as to give people a bad taste. Presumably they're frantically working to scale their better models for this use case. If you could get Gemini Pro on every search result the experience would be effectively perfect (in the sense of better error rates than what a non-specialist educated human reading the top results and summarizing them would achieve). That's years away from a scaling/cost/speed perspective.
One funny thing about Google summaries is “copy text” merges all of the links into a giant blob which gets interpreted as a single extremely long broken link. Not a great sign for attention to detail if they don’t even copy their own pasta (it’s been like this for months)
In the US (to start) there's now a flavor of Gemini 2.5 to power Search experiences like AI Mode and AI Overviews. Should be sufficiently good at this point.
Possibly their AI search - I don't know, I switched to Kagi to get a search engine that actually did what I asked instead of just trying to put as many ads in front of me as it could.
The paywall. I'm not against paying, but their free tier limitations are way too low.
The UI is just horrible and a huge waste of space. I had to use a user style to make it a non-headache to use.
Kagi routinely lacks results that every other search engine I use gets with no problem. I can't give any examples, but I found myself going to page 5+ more often on Kagi than on any other service.
If the paywall bothers you, then Kagi just isn't for you; the whole point of it is to have a business model where the users are customers, not the product.
I have no problem with the UI. It's snappy, clean, doesn't have a bunch of cruft, it's easy to find results, etc. It feels like Google of yore.
Same thing with the search results, I find them really excellent. In my experience, there's much less SEO spam. If you search for a programming language function, you get links to the proper documentation, not some trash "geeksforgeeks" site or whatever. I can't remember if I've ever had to go past page one.
Obviously YMMV on that, it's entirely possible it's different depending on what you're searching for. But I'm very happy paying for Kagi, and will continue to do so.
Having reluctantly used both: Bing's Copilot seems a lot more grounded in the current search results below it, whereas Google's Gemini seems a lot more likely to conduct its own searches from a different query than what was asked, and so also a lot more likely to hallucinate things or to provide answers that seem way different from the rest of the search page.
In terms of "best on the market" for AI search, I know that I am much more likely to trust the one that seems more like a direct summary of the stuff the search engine is traditionally responding with (and presumably has been well tuned in the last several decades) versus the one more likely to make stuff up or to leave the realm of what you are actually asking for some other search it thinks is better for you.
Though admittedly that's a very personal judgment call; some people want the assistant to search for "what they really mean" rather than "what they asked for". It's also a lot of gut vibes from how these AIs write about their research and some of that can be hallucinations and lies and "prompt optimization" as much or more than any sort of "best on the market" criteria.
Against the biggest corporations? Sure, I'd much rather fulfill the role of overly critical teenager than defensive, protective parent when it comes to rapacious companies who deserve none of our support.
For anything important I always ask LLMs for links and follow them. I think this will probably just create a strong incentive to cover important things and move away from clickbait.
It's probably a win for everyone in the long run although it means news sites will have to change. That's going to be painful for them but it's not like they're angels either.
I'm surprised the links work for you at all. 90+% of citations for non trivial information (i.e. not in a text book but definitely in the literature) I've gotten from LLMs have been very convincing hallucinations. The year and journal volume will match, the author will be someone who plausibly would have written on the topic, but the articles don't exist and never did. It's a tremendous waste of time and energy compared to old fashioned library search tools.
One thing I did once with great success was asking chatgpt something like "I'm trying to find information about X, but when I Google it I just get results about the app named after X. Can you suggest a better query?"
X was some technical thing I didn't know a lot about, so it gave me some more words to narrow down the query that I would not have known about myself. And that really helped me find the information I needed.
With the huge usage that LLM APIs are getting in all sorts of industries, they cannot be going away, and they're cheap.
If consumer AI chatbots get enshittified, you can just grab some open source bring-your-api-keys front-end, and chat away for peanuts without ads or anything anti-user.
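As a sketch of how cheap the bring-your-own-keys route is: an OpenAI-compatible chat endpoint is just one JSON POST, so even a few lines of stdlib Python work as a "front-end". The endpoint URL, model name, and environment variable below are assumptions for one provider; swap in whatever your provider documents.

```python
# Minimal bring-your-own-key chat call against an OpenAI-compatible endpoint.
# No SDK required; only the standard library.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # or any compatible endpoint


def build_request(messages, model="gpt-4o-mini"):
    """Build the HTTP request for a chat completion (model name is an assumption)."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )


req = build_request([{"role": "user", "content": "hello"}])
# urllib.request.urlopen(req) would send it; the reply's JSON carries the
# answer at choices[0]["message"]["content"].
```

Open-source chat UIs that speak this same protocol (against hosted or local models) are exactly the "grab a front-end" escape hatch being described.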
They could simply restore the search quality they had in 2010. No one wants these "AI" summaries except for people looking to get promoted for "having an impact" inside Google.
What Google is doing right now is sabotaging the search moat they do have. They are throwing it all away because of some "AI" rainmakers inside the company.
That's impossible unless the web reverted back to 2010, when walled gardens weren't prevalent, making your own blog was common, doable and often done by those without programming experience, forums were alive and well, and people wanted to share things on the web rather than group chats.
There are plenty of blogs, plenty of obvious low quality spam to block, plenty of features to enable allowlist and blocklists. To think for a second that the Google search experience couldn't be made significantly better at the snap of a finger by Google is to live in a fantasy world.
Sure, sure, except for this minor issue that the argument I was responding to didn't mention revenue, they talked about the state of the internet. So why again are you responding to my counter with a straw man?
they are losing more and more search to "AI." my 12-year-old never uses Google, and the couple of times I asked her to "Google it" she literally rolled on the floor laughing and called me a "boomer" :)
We are getting to watch The Innovator's Dilemma play out, yet again. The downward trajectory of Google's utility has only been worsening over the past 10 years-- but only in the last 3-4 have mainstream audiences started to notice.
The first part of that statement is valid but the second one isn't.
If anything, most of big tech has shown exceptional humility against new threats
Instagram incorporating stories (Snapchat)
YouTube incorporating Shorts (tiktok)
Google search incorporating AI Mode (perplexity et al)
This is in stark contrast to Kodak and the like, who scoffed at digital cameras and phone cameras as a distraction. They were sure that their ways were superior, ultimately leading to their demise.
Maybe you misunderstood the scope: Google is a search-advertising company first and foremost. Alphabet ignores (yes, they essentially invented transformers, etc., but actual productive effort tends to correlate with predicted TAM or with protecting the status quo, answering to shareholders while waiting to acquire threats) a market that will eventually usurp their cash cow of first-party search ads, because the new market isn't initially as lucrative due to market size. There is also the consideration of cannibalizing their high-margin search ads market with an error-prone and resource-intensive tech that cannot immediately be monetized in a second-price auction (from both inventory and bidder-participant perspectives).
A $10 billion market would be under 3% of Google's revenue, but if the market grows 10x it is much more attractive - except now the incumbent may be trailing the nascent companies who refined their offerings (without risk of cannibalizing their own) while said market was growing. We are currently at the stage where Google is incorporating Gemini responses and alienating publishers (using their content without sending monetizable clicks) while still focusing on monetization via their traditional ad products elsewhere on the SERPs (text search ads, shopping ads). Keep in mind, they also control 3rd-party display ads via DoubleClick and AdSense - but inventory on 3rd-party sites will drop and Google will lose their 30%+ cut if users don't leave the SERPs.
Dozens of major news publications have covered the decline of Google's organic search quality and its emphasis on monetization (ignoring incorrect infoboxes and AI-generated answers). See articles such as https://www.theatlantic.com/technology/archive/2023/09/googl... and a collection even posted here on HN https://news.ycombinator.com/item?id=30348460 . This has played into why people have shifted away from Google. Their results are focused on maximizing Google's earnings per mille, as leaked (https://www.wsj.com/tech/u-s-urges-breakup-of-google-ad-busi...), with the ads team holding guanxi over search quality. Once Amit Singhal and Matt Cutts left their roles, the focus on monetization over useful SERPs became much more evident.
I never really bought the idea of any AI company killing Google. They have too much momentum to really be seriously impacted, too many people who only use them exclusively and will continue to do so their whole lives on the name brand alone. They might risk a lack of "growth" but that only really matters to shareholders, not to end users.
Yahoo had 90%+ of the search market and they lost it in a few years to google because they were unable to innovate. I don't think anyone saw that coming. Everyone was building "portals" (remember those? AOL.com? I think verizon.com was one at some point with news and weather) to try and compete with Yahoo's dominance in search. It can happen again. LLM Chat is certainly an existential threat to the googs. Part of the OpenAI lore is that google originally viewed it as a threat to their search/advertising revenue model and defunded it on that basis.
The fact that people are willing to pay for LLMs and use them over search seems to indicate that Google's free product isn't as good, and LLM chat is better enough that people are willing to pay for it.
The major flaw in your argument is that Yahoo is still around. They still have tons of traffic, some of the most in the world, just behind Reddit. They are not constantly growing, yes, but that is exactly my point. They have a satisfied userbase who will use them for life as does Google. Neither is going anywhere any time soon. Both make billions of dollars annually.
Yahoo's peak (inflation-adjusted) value was ~$325 billion, and the last sale of their assets was $4.5 billion in 2017, which is about 1% of its peak value. I can't imagine Google investors wanting to risk 99% of their investment.
It's funny most people are saying Google will win the AI wars, though that is precisely what will cannibalize their current business model, which had a much bigger moat than frontier LLMs, apparently.
You think we won't start seeing ads or paid-for refs/links in those AI responses? Not defending Google here; when they turned that feature on I posted to some friends "another nail in the coffin for the web as we know it" or something to that effect.
Eventually open models will be able to do the same, so why would anyone use an ad-ridden service? The first LLM provider who turns on ads in their responses will disappear in a blink.
It will take a long time until an average person has the resources to run models of similar quality (and speed) as Google and OpenAI can provide today.
Is this really that bad for Google? Do Perplexity and OpenAI use paid SERP API under the hood? Google doesn't have to make money from ads on search, if its paid search.
Spare us the "woe is me", for they literally invented replacing the publishers. Yesterday it was infoboxes, today it's shitty AI summaries. Which is still the case, so good riddance.