This action comes a bit late, at the end of the "Search engine" era, at a time when AI responses from many sources are largely replacing the "Google Search".
Similar action happened against Microsoft Windows around 2000, just as the rise of web-based apps (online email, Google Docs, etc.) largely made the underlying operating system less relevant to how people use their computers and apps.
So I read this as: the dominant player can monopolize a market without issue while it's relevant, and once the market starts to move on, the antitrust lawsuits come in to "make the market more competitive" at a time when that era is mostly over.
And looking at attempts to regulate early (as with the last administration's AI legislation, now being repealed), we can see that only hindsight is 20/20: regulating too early can kill a market. My conclusion is to just let the best product win, and even dominate for a while, as this is part of the market cycle; when a better product/platform comes along, the market will move to it.
Because we're still in the "get them hooked" stage where AI startups and Google/Meta are losing money on AI.
Once they start properly monetising (i.e. making users pay what it actually costs them to train and run these models), it will be a different story. The vast majority of people won't pay $20-30/month for an LLM to replace their search engine. (And an analysis I saw of OpenAI's business model and financials indicated they're losing money even on their paid tier, per query.)
They’ll instead accept being manipulated and fed content influenced by money, be it ads or individualized content that somehow serves whoever is paying for it.
IMO the AI players are only hooking the power users and technically inclined (in regard to AI replacing search), but there's a long, long tail of people who are going to be using Google as their search engine until the end of days. Think the type of person who types "facebook.com" into Google instead of their address bar — they're not switching over to ChatGPT or Kagi any time soon.
I’ve started using ChatGPT to look up most things I used to Google. It gives me immediate, concise results without having to parse bloated websites full of ads and other garbage.
Google themselves are trying to figure this out, with the first (top placement) of search results showing their Gemini AI response, at least for me. I read this as an attempt to keep users on Google instead of asking ChatGPT or some other AI. What's your take on that?
My take is that it's primarily a smart way to quickly gather lots of Elo/A-B feedback about LLM responses for training, whilst also reducing people switching to ChatGPT. OpenAI has a significant first-mover advantage here, and it's why they're so worried about distillation, because it threatens the moat.
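To make that concrete, here's a toy sketch of how A/B preference feedback could feed Elo-style ratings of competing responses (standard Elo formula; the setup and numbers are purely illustrative, not anything Google has described):

    def elo_update(r_a, r_b, a_won, k=32.0):
        # Standard Elo: compute A's expected score, then shift both
        # ratings by K times the surprise. The A/B click is the win signal.
        expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
        score_a = 1.0 if a_won else 0.0
        delta = k * (score_a - expected_a)
        return r_a + delta, r_b - delta

    # Two response variants start equal; a user prefers variant A:
    print(elo_update(1500, 1500, a_won=True))  # (1516.0, 1484.0)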
Google, on the other hand, has a huge moat in access to probably the best search index available and existing infrastructure built around it. Not to mention the integration with Workspace emails, docs, photos - all sorts of things that can be used for training. But what they (presumably) lack is the feedback-derived data that OpenAI has had from the start.
ChatGPT does not use search grounding by default and the issues there are obvious. Both Gemini and ChatGPT make similar errors even with grounding but you would expect that to get better over time. It's an open research question as to what knowledge should be innate (in the weights) and what should be "queryable", but I do think the future will be an improved version of "intelligence" + "data lookup".
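To sketch what that split might look like (all the functions here are hypothetical stand-ins, not any vendor's real API): answer from the weights when the model is confident, fall back to the index when it isn't.

    def answer_from_weights(question):
        # Stand-in for a raw LLM call returning (answer, confidence);
        # a real system might derive confidence from token logprobs.
        if "capital of France" in question:
            return "Paris", 0.95
        return "not sure", 0.10

    def search_index(question):
        # Stand-in for a search-index lookup returning snippets.
        return ["Mount Everest is Earth's highest mountain above sea level."]

    def grounded_answer(question, snippets):
        # Stand-in for an LLM call with retrieved snippets in the prompt.
        return "Based on the index: " + snippets[0]

    def answer(question, threshold=0.8):
        # Innate knowledge first; queryable knowledge as the fallback.
        draft, confidence = answer_from_weights(question)
        if confidence >= threshold:
            return draft
        return grounded_answer(question, search_index(question))

    print(answer("What is the capital of France?"))     # from the weights
    print(answer("What is Earth's highest mountain?"))  # from the lookup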
Every AI chatbot to date suffers from the "Film expert" effect. That is when a script writer presents data from an "expert" in a movie or show to the audience in response to some information need on the part of the other characters. Writers are really good at making it sound credible. When an audience member experiences this interchange, generally they have one of two experiences. Either they know nothing about the subject (or the subject is made up like warp drive nacelle engineering) and they nod along at the response and factor it into their understanding of the story being told. Or, they do know a lot about the subject and the glaring inaccuracies jolt them out of the story temporarily as the suspension of disbelief is damaged.
LLMs write in an authoritative way because that is how the material they have been trained on writes. Because there is no "there" there, an LLM has no way of evaluating the accuracy of the answer it just generated. People who "search" using an LLM in order to get information about a topic have a better than even chance of getting something that is completely false, and if they don't have the foundation to recognize it's false, they may incorporate that false information into their world view. That becomes a major embarrassment later when they regurgitate that information, or act on it in some way that comes back to bite them.
Gemini has many examples of things it has presented authoritatively that are stochastic noise. The current fun game is to ask "Why do people say ..." and complete it with some made-up saying like "Too many cats spoil the soup." That generates an authoritative-sounding answer from Gemini that is just stupid. Gemini can't say "I don't know, I've never seen anything that says people say that."
As companies push these things out into more and more places, more and more people will get the experience of believing something false because some LLM told them it was true. And like that friend of yours who always has an answer for every question you ask, but whose answers keep turning out to be bullshit, you will rely on that information less and less. And eventually, like your buddy with all the answers, you stop asking them questions you actually want the answer to.
I'm not down on LLMs per se, but I do not believe there is any evidence that they can be relied on for anything. The only thing I have seen, so far, that they can do well is help someone struggling with a blank page get started. That's because more people than not struggle with starting from a blank page but have no trouble correcting something that is wrong, or rewriting it into something better.
"Search" is multifaceted. Blekko found a great use case for reference librarians. They would have paid Blekko to provide them an index of primary sources that they could use. The other great use is shopping if you can pair it with your social network. (Something Blekko suggested to Facebook but Zuck was really blind to that one) Blekko had a demo where you could say, "Audi car dealer" and it would give you the results ranked by your friend's ratings on their service. I spent a lot of time at Blekko denying access to the index by criminals who were searching for vulnerable WordPress plugins or e-commerce shopping carts. Chat GPT is never going to give you a list of all sites on the Internet running a vulnerable version of Wordpress :-).
So my take is the LLM isn't a replacement for search and efforts to make it so will stagnate and die leaving "Search Classic" to take up the slack.
If you trained a model on a well-vetted corpus and gave it the tools to say it didn't know, I could see it being a better "textbook" than a physical textbook. But it still needs to know what it doesn't know.
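A trivial sketch of that "vetted textbook" behavior: only answer when there's support in the curated corpus, and abstain otherwise (the corpus and matching here are deliberately toy placeholders):

    VETTED_CORPUS = {
        "boiling point of water": "Water boils at 100 °C at sea level.",
        "speed of light": "Light travels at about 299,792 km/s in a vacuum.",
    }

    def textbook_answer(question):
        # Answer only when a vetted source covers the topic; abstain otherwise.
        q = question.lower()
        for topic, fact in VETTED_CORPUS.items():
            if topic in q:
                return fact
        return "I don't know; I have no vetted source on that."

    print(textbook_answer("What is the boiling point of water?"))
    print(textbook_answer("Why do people say too many cats spoil the soup?"))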
What happens when search engines start displaying AI-generated, human-seasoned content? Everything written online is being seasoned by ChatGPT et al., and human thinking is being conditioned by its highly convincing outputs.
Why should I trust an ad-supported search engine with its bastardized search experience, displaying top-ranked results from the highest-paying advertiser and now mixing in AI slop pretending to be human-written for the foreseeable future? The future looks bleak.
I'd rather create an AI agent to do my Google search for me, cut out the ad-link result bias, and further synthesize the results for me in a human-readable and interrogable format.
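Roughly, such an agent would just drop the sponsored results before synthesis; a minimal sketch, assuming a hypothetical search API and LLM call (both stand-in functions here are invented):

    def fetch_results(query):
        # Stand-in for a search API call; results carry an is_sponsored flag.
        return [
            {"snippet": "Sponsored: Buy widgets now!", "is_sponsored": True},
            {"snippet": "Widgets are small mechanical devices.", "is_sponsored": False},
            {"snippet": "A history of the widget industry.", "is_sponsored": False},
        ]

    def synthesize(snippets, query):
        # Stand-in for an LLM call merging organic snippets into one answer.
        return query + ": " + " ".join(snippets)

    def search_without_ads(query):
        organic = [r for r in fetch_results(query) if not r["is_sponsored"]]
        return synthesize([r["snippet"] for r in organic], query)

    print(search_without_ads("what is a widget"))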
I will take that bet, Chuck. Maybe not completely replace, but AI will be the de facto search platform. I find myself using Google less and less these days.
I only use Google when I am in the mood to search for something and find it below the fold, after scrolling through mindless semi-related sponsored links.
> This action comes a bit late, at the end of the "Search engine" era, at a time when AI responses from many sources are largely replacing the "Google Search".
AIs are using search indices more and more. Google has the largest, and there is a risk of Google using its monopoly in search (in particular its index) to give itself an unfair advantage in the nascent AI market.
Do you think it's a coincidence that companies start getting regulated when they are at the end of their rope? I think it's the other way around. When Google started to lose power, when alternatives started to pour money into the political system, when jobs and money from alternatives presented themselves, then the wheels of justice began turning. The regulation is caused by Google's apparent decline.