DeepSeek seems to go the way of trying to please everybody. They offer two alternatives, which you can use separately or both at the same time (named in an obvious way): DeepThink and ... well ... Search :)
Your circles might have a little more technical literacy than most. I'm currently working part-time in retail at a hardware store, and the number of people who come in looking for parts specified exclusively by a single AI overview is mind-boggling. People repairing car engines come in looking for bolts with specific lengths, materials, and thread pitches that AI told them they needed. I haven't had anyone come back and explicitly tell me that AI led them wrong, but I'm sure they've had to make multiple trips back out here.
The shift towards LLM-based search products is significant because they offer more conversational and personalized responses than traditional search engines. This change is driven by users seeking quicker, more relevant answers and a better overall experience.
So it must be true, right? Coz that's the only thing I searched for. I got my answer. Why would I search for the opposite? My bias was confirmed. I'm happy and will repeat the results to all my friends, who will search for the same thing to confirm and get confirmation!
Yes, it's meant sarcastically. Personally, I agree with you: you want to look at the pros and cons, both sides.
But there are many, oh so very many people out there who would literally do exactly what I wrote. They do it all the time. That's why we have these echo chambers everywhere. And AI, as we can see here, is not making it better.
And it's not just the "general population". I see it on a technical level too at work. Developers just trusting the AI output. It sounded confident when it said it found the root cause, fixed the bug and added tests. So it's good to commit, right?
While I still often just search with Kagi, I've found it's frequently easier to write a full-blown natural-language question into Kagi Assistant, which queries an LLM that replies along with references to where it supposedly found the info. If the reply seems off, I can click through to the references and check them.
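For what it's worth, that ask-then-verify flow is easy to script as well. Here's a minimal sketch using Kagi's public FastGPT API as a stand-in for the Assistant; the endpoint, headers, and response fields are my assumptions from memory of their API docs, so treat the details as approximate rather than authoritative:

    # Sketch of an "ask, get references, click through" flow.
    # Assumes Kagi's FastGPT API (a stand-in for Kagi Assistant);
    # endpoint and field names may differ from the real service.
    import os
    import requests

    def ask_with_references(question: str) -> None:
        resp = requests.post(
            "https://kagi.com/api/v0/fastgpt",   # assumed endpoint
            headers={"Authorization": f"Bot {os.environ['KAGI_API_KEY']}"},
            json={"query": question},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        print(data["output"])                    # the LLM's answer
        for ref in data.get("references", []):   # sources to verify by hand
            print(f"- {ref.get('title')}: {ref.get('url')}")

    ask_with_references("Is dark chocolate actually good for you?")

The point isn't the particular API, it's that the references come back alongside the answer, so checking the sources stays a one-click (or one-loop) step instead of an afterthought.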
Isn't traditional search going to have the same issue? If you search about how chocolate is good for you, you'll turn up plenty of sites willing to confirm your beliefs, AI summary or not.