Google already has many similar models. The most similar is maybe their LaMDA model: https://blog.google/technology/ai/lamda/
It's strange that the article does not mention that.
I never really understood the argument that OpenAI has some technology that Google does not have. That's just not true; if anything, the opposite is true. And Google's LaMDA is even older than ChatGPT.
Also, that technology is not really so fancy. What matters is that you have enough computing power. When you have that, most people can replicate something like ChatGPT.
And in terms of computing power, it's hard to beat Google and their TPU clusters. I don't really have numbers, but I think Google also wins here.
Google is an established brand. When ChatGPT makes up fake historical facts or has Opinions about Races, that's a cute quirk of the beta technology you were trying out. When a regular Google search does the same it is perceived differently. You can tell they've tried to be conservative with the AI-driven "People Also Ask" feature which quotes a snippet of source text, and I still see completely wrong answers every so often.
They also depend on distinguishing "real" content from spam in order to have pages to link to and train on, and with GPT we may see the conclusive defeat of spam detection. There are detectors that claim to detect ChatGPT, but I suspect motivated adversaries can defeat those by training their own models.
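For what it's worth, here's a minimal sketch of the perplexity heuristic such detectors appear to lean on; the reference model (gpt2) and the threshold here are stand-ins I made up, not anything a real detector publishes:

    # Sketch: model-generated text tends to look unusually predictable
    # (low perplexity) to a reference language model. Threshold is invented.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    def perplexity(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
        return torch.exp(loss).item()

    def looks_generated(text: str, threshold: float = 40.0) -> bool:
        return perplexity(text) < threshold  # "too predictable" = suspicious

Which is exactly why a motivated adversary wins: serve text from your own fine-tuned model and the statistical fingerprint shifts.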
> You can tell they've tried to be conservative with the AI-driven "People Also Ask" feature which quotes a snippet of source text, and I still see completely wrong answers every so often.
All well and good, but that doesn't explain why it wasn't Google that came up with Copilot.
Paul Graham predicted that a Google Search competitor would target hackers ("a dinosaur egg" he called it, building Search in the image of Unix: "give you the right answers fast"), which is exactly how it is playing out with OpenAI + GitHub + Microsoft.
Because Copilot is a copyright hellscape that litters codebases with landmines.
Anyone using it for company-owned code is relying on the cloned code never being discovered.
Google would have learned an expensive lesson from Oracle v Google. Even the smallest snippets of "copied/replicated/duplicated/parallel implementation" code are bad juju.
The model itself would probably be considered a transformative work, and Microsoft likely has a broad license to do whatever they want with code hosted on GitHub.
However, there is a reason the Copilot instructions say that users shouldn't trust the results.
"These include [...] IP scanning"
We will learn from the trial whether or not it is possible to wash away copyright by sending it through an opaque, lossy compression algorithm. What if it turns out that people are free to use a model trained on _any_ source code it finds, regardless of license?
Zowie, that would be a bigger change to the industry than if Oracle had won.
I believe Google was asleep at the wheel. If you go back to the last decade, copyright claims didn't stop Google from acquiring Android and YouTube.
Besides, a few days ago some in the media claimed that Google is already working on a Copilot competitor.
Copyright and license infringement is a valid claim, but personally, I feel it'd be a shame if AI couldn't be trained on publicly available sources (or Google Search couldn't index publicly available websites, or Google Maps couldn't map the world, etc).
The real issue starts even before that: Google Search is still a cash cow, and integrating GPT-style tech could threaten their own economic model. They are in a tricky situation here.
Search is exactly how they display ads, though. Language models could replace a very large portion of classic search with a different interface, maybe 40%. How they are going to evolve their current business model in the face of this new reality is unknown.
Any tech initiative will be blocked by the executive side afraid of losing the cash cow.
Targeted, useful ads are so rare almost nobody has seen them. How can they push low quality ad inventory through a trusted AI? If they train the AI to trick people into bad purchases, it will be a disaster.
Google has been slow because they can’t figure out how to release this without destroying their business model.
It’s very hard to be commercially successful with an innovation when it attacks the cash cow side of your business. It can be done, but it creates a lot of internal conflict and incentives to wait, to gimp the innovation, or to try to bolt it on to the cash cow to mitigate the damage.
I have no doubt Google could be a leader in the LLM space... But are they willing to destroy their own moat and take a serious revenue hit to do so?
This is the same situation Kodak found themselves in years ago. Do you accept that the business landscape has changed and settle for a significantly smaller role in the market, or do you hope that somehow the alternative becomes unpalatable and you can keep your market position?
It's the innovator's dilemma: do they embrace new products outside of their historical core cash cow, or do they stick with the cash cow? Search and ads are where the history is, but empowering a million different AI-powered products via an API is the future. While empowering AI products for third parties is a boon for the cloud business, running those APIs has a business model akin to "trading time for money" rather than directly getting a cut of revenue.
It seems less sexy to be a commodity provider of AI than to be the one that makes the products, and to get a share of cost rather than a direct share of revenue. But I have heard that during the gold rush it was better to be a tool and infrastructure seller than to be a prospector.
The business landscape has not changed yet, it's still early stages and there are no competitors utilizing the tech to displace Google. However, one can now clearly see the threat and what might transpire in the next few years.
That's impossible. They must have asked this survey at a Silicon Valley networking event on AI or something. A rough search shows there are 63,644,000 "professional workers" in the US, so this survey is saying 17,183,880 Americans are using ChatGPT at work. How can anyone seriously believe that stat?
What planet do you live on? I work in midtown Manhattan; I bet if I walked down the street, 75% wouldn't even know what ChatGPT is, and I doubt the other 25% all actually use it.
> 27% of professionals use chatGPT for work already.[0]
> found that 27 percent of professionals have used the program to help them with work-related tasks.
"Have used": as a free test, at least once, maybe twice. But will they tomorrow? Until real value, habits, and serious problems are established, my estimate of day-to-day professional use is still open anywhere between 1% and 26%.
Yeah, I personally find ChatGPT to be easily 1 to 2 orders of magnitude better than Copilot, specifically for generating code or transmuting code from one language into another language. Really powerful.
I don't know what the glue is under the hood, but ChatGPT is dramatically better at understanding context and goals, and it has that whole refinement layer for iterating on responses.
There are also things ChatGPT can't/won't do that GPT-3 will, like writing music.
But Google is investing in the research. They could become Kodak if they fell far behind, but that's not the case at this point. They just refuse, for the moment, to be the leader they could be.
Kodak had some of the most advanced and extensive photo research of anyone. They had early digital cameras but couldn't let go of the film-processing revenue (so they came up with the "Advantix" failure).
Kodak had so much research that they pretty much survived on licensing their patent portfolio.
I don't think this is true at all. First you'd have to show that the new thing is useful in the same ways that the old thing is, and that the new thing has a viable business model.
I don't think ChatGPT has actually done that yet. Can they even charge enough to cover costs? If they do, will enough people use it? And the questions about accuracy are material: if the LLM lies all the time, can it really replace a search engine? What do you do with a months-old training set?
Figuring out how to combine search + LLM is the problem and OpenAI doesn't have a search engine.
Even if it sometimes lied, I would pay real money for a service that answered questions like "How do I cook basmati rice in an Instant Pot?" Google has lost the plot on queries like this, and I don't have time to wade through the masses of bullshit it returns. I would guess that the average person is in the same boat, preferring a simple answer that might be wrong to an endless stream of blog spam.
They're going to charge $42 a month which is a lot. That should cover the costs easily. It will put people off from using it for personal purposes though. Where I live in Spain that's a lot of money. About half my monthly energy bill (and I'm an IT enthusiast with 24/7 servers and a 3080Ti game PC)
Embedding ads is much more natural than displaying them. I don't think Google will let you know which part is paid, just like how product search moved to ads-only.
Google search isn't reliable either. Most of the results I get are either irrelevant or at least partly wrong (occasionally outright misinformation). But it's clear that the content comes from third parties without endorsement by Google so advertisers and users mostly accept it.
When Google deploys a large language model to directly synthesize content it will carry a greater reputational risk for them and their advertisers. What happens when an ad appears next to an AI answer that could be construed as racist or promoting medical misinformation? The legacy media already hates big tech and will spin it as a "gotcha" moment.
Remember Microsoft putting Win95 on the first tablets and making them pretty much unusable? When it's the cow keeping the lights on, everything grows teats.
>Google has been slow because they can’t figure out how to release this without destroying their business model.
I don't think that is the real explanation, because even I can think of how to monetize it: just have AdWords read your chat and advertise to you based on it. Far from a clever idea, it's exactly the way search already works!
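A toy sketch of what that could look like, with a made-up ad inventory and an intentionally naive keyword match (exactly the signal search ads already key on):

    # Everything here is invented for illustration.
    AD_INVENTORY = {
        "rice": "Ad: Premium basmati rice, 20% off today!",
        "laptop": "Ad: Back-to-school laptop deals.",
        "flight": "Ad: Cheap flights to 500+ destinations.",
    }

    def pick_ads(chat_history, max_ads=2):
        words = {w.strip(".,?!").lower()
                 for msg in chat_history for w in msg.split()}
        return [ad for kw, ad in AD_INVENTORY.items() if kw in words][:max_ads]

    print(pick_ads(["How do I cook basmati rice in an instant pot?"]))
    # -> ['Ad: Premium basmati rice, 20% off today!']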
There are a bunch of unclear questions about the production-readiness of this as-is.
Where does ChatGPT get its data? Does it just hijack the data from websites without referencing them? That might work for a limited dataset (commercial or loose licenses), but Musk already said he won't let it use the Twitter data, for example. Google Search is of mutual benefit to the publishers that provide the data because it drives traffic. This thing doesn't drive anything, so if I as a publisher found my data in it, I would be getting ready to sue them.
There are other questions, like: how do you continually retrain as you crawl the bigger web? On a really large number of sources, how do you rank and filter out crap spam from the actually useful, trainable content? So, way too early to kill regular search imo.
> Does it just hijack the data from websites without referencing them?
How does one hijack data? Are you referring to scraping public websites that publish data for free? I wonder what the appropriate way to reference web-scraping everything publicly available would be. I don't think Google or other search engines do this, but I like the idea of listing out the massive set of trillions of URLs that were included in the training, along with a scrape date.
I'm not sure Google is of mutual benefit, as they summarize site content and prevent visits to the site. It would be interesting to measure the pros vs the cons there. I know there have been lawsuits over the years but don't know the resolutions.
Google drives massive amounts of traffic to web publishers. It's carving out a bit for itself here and there with things like e.g. weather and answers to common questions, but they still drive a ton of traffic, and that's why (pretty much the only reason) the websites tolerate it.
Just because text on a website is "public", that doesn't mean you can copy it and plug it into your product. Nothing is "free" by default; many publishers have a deal with Google about what can and cannot be used, specified via robots.txt or in Search Console.
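For reference, honoring those robots.txt opt-outs is trivial with Python's standard library; "TrainingDataBot" below is a hypothetical user agent, standing in for whatever a model trainer would identify itself as:

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://example.com/robots.txt")
    rp.read()
    if rp.can_fetch("TrainingDataBot", "https://example.com/some-article"):
        print("allowed to crawl")
    else:
        print("publisher has opted out")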
Hmm, no, but offering that model as a service does have a strong possibility of infringing copyright. I think the legal complexity of this will have to be hashed out in court over the next few years.
Exactly. In fact you will have much more context to go on. The user will not only tell you what they're searching for but often also why or other pieces of context like what they want to use it for.
They also didn't release Imagen, which doesn't seem to pose any particular threat to any of their existing businesses. And doing a ChatGPT-style demo would also not have threatened their business, for the same reason ChatGPT itself doesn't: you can make it available for a while, get loads of attention, and then paywall it until you figure out a better approach.
The PR threat seems overblown. OpenAI isn't suffering any obvious reputational problems from making models available to play with, because people understand that AIs and their creators aren't the same thing.
Also, though OpenAI is mostly a collection of really cool tech demos that may or may not turn into a business, not releasing demos like them is a major strategic blunder for Google. If you're an ambitious AI researcher where do you want to work now - the place where your work will get put out there in front of millions of people and become an overnight sensation, again and again? Or the place where your work will be put in front of, at best, a bunch of Googlers, and the only public knowledge of its existence comes from leaks and a web page saying "we can do that too"? Unless Google makes a vastly better offer than OpenAI can, it's going to be OpenAI.
They've made this mistake before, with the cloud. Years ago, even before AWS existed at all, Borg and related tools were vastly superior to any publicly available cloud system. When I worked there we repeatedly asked management why that wasn't being turned into a business because it was so damn obvious that there was massive value in what had been built. The answer was always that Google could always make more money by keeping its tech private, because ads was such a great business model. That take ... didn't age well. They lost their chance to define what cloud meant and now GCP is lagging in third place.
Unfortunately for Larry and Sergey I think the reason they aren't keeping up with OpenAI is actually not business model related, which is why they're going to have a hard time fixing it. They actually told us why they aren't releasing demos already and there's no reason not to take them at face value: they believe AI is literally dangerous and "unsafe" to release.
It's been apparent for some time that there's some sort of weird internal purity spiral going on inside Google. Imagen is still only available to employees and, apparently, has been filtered so it refuses to draw people. If you want it to draw a person-like thing, you have to ask it to draw robots. That's not a business-model concern. The rationale appears to be some sort of DEI maximalism: if you asked it for people, it might not draw the right kind of people, that would make Googlers/people racist, and so we have to block it. This reasoning seems largely unintelligible from the outside looking in, especially in parts of the world where these topics are less emotionally/historically charged. It comes from the kind of ultra-transitive worldview in which one person says something, or allows some words/images to appear somewhere, and someone else who may not even have been exposed to those words/images does something bad, and it's therefore the fault of the first person.
Google's business model was once summed up by the mission of making the world's information "universally accessible and useful". I'm not sure they still believe in that, or at least not enough of them do. If Larry and Sergey want to get their AI products out there they're not only going to have to tackle thorny business model issues, but also try to reset the culture to the old one. The one that looked towards an optimistic future in which advanced technology could empower everyone, regardless of who they are.
It's hard to look toward an optimistic future when, at least from some perspectives, the optimistic future that was predicted decades ago didn't arrive, and in some ways the proliferation of computing has made things worse. The techno-utopianism expressed in Steven Levy's _Hackers_, for instance, seems quaintly naive now.
It's easier to be optimistic in some areas, though. In my own field of accessibility, the rise of personal computers and digital communication has obviously done a lot of good, and I only wish it would go faster (e.g. no more paper).
>"The PR threat seems overblown. OpenAI isn't suffering any obvious reputational problems from making models available to play with, because people understand that AIs and their creators aren't the same thing."
Lol, no they don't. A week ago there was an article here about how chatGPT is woke.
The reputational hit or improvement (depending on your perspective) doesn't come from making an LLM available, but from the way they've been visibly altering its politics over time. They are altering it exactly because LLMs are not the same thing as their creators, so it started out by giving answers they didn't like. But I didn't see many people care about that (outside of maybe some US journalists, but they're not representative of the wider population). Obviously OpenAI care, but again, that doesn't seem to be driven by actual end user dissatisfaction. Most people just seem to think it's kind of amazing and if it says things that are daft or untrue or stereotyped, well, it's a machine so what do you expect?
>The reputational hit or improvement (depending on your perspective) doesn't come from making an LLM available, but from the way they've been visibly altering its politics over time.
According to you, someone who clearly has an axe to grind, given your multiple ridiculous characterizations of the issue. I'm positive that OpenAI is better off making sure that ChatGPT can't drop n-bombs, rather than the alternative you are suggesting.
> Obviously OpenAI care, but again, that doesn't seem to be driven by actual end user dissatisfaction.
I didn't say "end user dissatisfaction" I said political blowback. They aren't the same. In fact, there's a good chance the blowback would come from the public at large, just by seeing what chatGPT said, rather than making chatGPT say it on their own. Frankly, I'm not sure why this needs to be explained to you as it should be painfully obvious to anyone that ever leaves their house and interacts with other people in the world in real life.
>Most people just seem to think it's kind of amazing and if it says things that are daft or untrue or stereotyped, well, it's a machine so what do you expect?
Most people? Most people most certainly do not think and behave that way. Maybe you mean most HN'ers? Even then, that's incredibly generous. You seem like you are trying to backfill reality into your political grievances. Yuck!
ChatGPT's politics are those of a center-left liberal who is terrified of saying anything that would offend far-left progressives. Which I'd bet is a pretty accurate description of its creators.
It's much more easily explained as a business decision to minimize blowback but I wouldn't want to get in the way of your feelings regarding anyone that may or may not have blue hair.
Have you used LaMDA? Unsure if you've used the magical internal version, but the version that's available to the general public is terrible. I was so excited to get access to the AI Test Kitchen app, but even today it strictly limits you to three narrow and useless scenarios; only lets you put in about 40 characters of input at first, while subsequent inputs are generated for you and you can only select from a few available options (choose-your-own-adventure style); and after 4 or so interactions, the history is wiped and it starts over.
I'm sure Google COULD have something better than ChatGPT, but what they have today is so bad it's almost an insult to the users like me who excitedly signed up, and waited months on the waiting list to get access to it.
I wonder when we will get to a point that network effects start to dominate in this space, rather than tech/research. It does seem like (a) ChatGPT being publicly available, (b) it learning from user interactions and (c) it having generated a significant level of hype could put it in a position where it pulls away from competition because there is a flow of user information coming in that might allow it to learn faster and better than competitors. Maybe this can't be compensated for just by scraping more data from the web.
I'm not familiar enough with how they've implemented part (b) to be able to judge how effectively they're doing this though.
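My guess (and it is only a guess) is that (b) starts with simple logging, something like the sketch below, with the ratings later feeding a reward model or filtering fine-tuning data:

    import json, time

    def log_interaction(prompt, response, rating, path="feedback.jsonl"):
        # rating: e.g. +1 thumbs up, -1 thumbs down from the UI
        with open(path, "a") as f:
            f.write(json.dumps({
                "ts": time.time(),
                "prompt": prompt,
                "response": response,
                "rating": rating,
            }) + "\n")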
> I never really understood the argument that OpenAI has some technology that Google does not have. That's just not true. The opposite is much more true. And Google's LaMDA is even older than ChatGPT.
Google has to be a long way off monetizing what they have though. If Google was in a position to roll out AI soon I can't imagine why they'd be getting rid of 12,000 people who they know can pass their hiring criteria. They would put some of those engineers to work integrating the AI code into Google's products. Unless Google's AI is so good it can integrate itself I suppose.
The fact they're letting 12,000 people go shows they don't have profitable work for those people. That alone should tell us something about the position of Google's AI strategy right now.
Do we actually know much about which people were let go, and from what teams? Maybe the percentage of AI people at Google has actually grown in relative terms?
And Google is in fact using AI almost everywhere in production already. There is some sort of AI in almost every product. Language models are everywhere too, e.g. in typing prediction on your phone, in Gmail, in speech recognition, and many other places. I think they just do not use the biggest models for those things but more efficient ones, which can partly even run offline on your device (e.g. for typing prediction).
Also in Google Search, they use lots of AI, including neural networks.
It's just that for LaMDA specifically, they don't have a good product yet.
> The formula looks at the variables below, and then spits out a "number" for every Googler. Each PA VP gets a % to cut, and as such there is a threshold. Anyone below that threshold gets RIF'd.
> Variables are:
> 1) Location of labor. US Premium Plus was largely impacted versus cheaper areas.
> 2) Tenure and performance in level.
> 3) "Runway" of comp (e.g. base salary vs MRP; e.g. 0.8-of-MRP Googlers have a long runway, vs 1.x-of-MRP Googlers are basically top of band, and 'tenured' with no runway except promo).
> 4) Promo velocity
Anyways, I didn't understand the acronyms so I decided to feed it to GPT and it definitely made it easier to understand:
Google is using a formula to determine which employees will be laid off (known as RIF: Reduction in Force)
The formula takes into account various factors such as location of labor (with US Premium Plus areas being more heavily impacted), tenure and performance in the current level, "runway" of compensation (the difference between base salary and maximum potential salary), and promo velocity (how quickly the employee has been promoted within the company)
This formula calculates a "number" for each employee based on these factors
Each Product Area Vice President (PA VP) is given a percentage of employees they must lay off
Employees with a score below a certain threshold, determined by the formula, will be laid off
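To make the shape of it concrete, here is a toy reconstruction with completely invented weights and threshold; nothing here reflects Google's actual numbers:

    def layoff_score(perf, promo_velocity, comp_ratio, tenure, location_cost):
        # comp_ratio = base salary / market reference point (MRP):
        # someone at 1.x of MRP has "no runway" and scores lower.
        runway = max(0.0, 1.2 - comp_ratio)
        return (2.0 * perf + 1.0 * promo_velocity + 0.5 * runway
                - 1.0 * location_cost - 0.2 * tenure)

    THRESHOLD = 1.0  # implied by each PA VP's cut percentage
    score = layoff_score(perf=0.8, promo_velocity=0.2, comp_ratio=1.1,
                         tenure=5, location_cost=1.0)
    print("RIF'd" if score < THRESHOLD else "kept")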
> That alone should tell us something about the position of Google's AI strategy right now.
Not really? Maybe they believe those 12,000 people have the wrong skills for this job. Maybe they believe they can get the AI integrated with far fewer people. Maybe they would have fired 20k people but decided to keep 8k of them to integrate the generative language model into products.
Not saying any of this is true. In fact more likely that the company is just reacting randomly without a big overarching plan. I'm just saying that I don't think you can draw conclusions from the fact that layoffs are happening about their AI strategy.
But it's not just about compute. OpenAI did a lot of work to tame a wildcat. GPT-3 is a monster not fit for public consumption; ChatGPT is a kitten. Along the way, OpenAI discovered that including code in the training corpus improves the general capabilities of LLMs. Google may have models with superior raw capability, but they do not have a safe and helpful product. They are clearly behind the ball, and recent PR from DeepMind confirms they are feeling it.
I think they are winning the race to run these models more cheaply (see the price drop in the Embedding API), and that's the key here.
I've heard that for Google, running LaMDA on every query would cost 2x their revenue. But if they drop the cost by 10x, then it's "only" 20% of their revenue, and that's probably achievable with special hardware and optimizations. So whoever does it first will take over the search space.
LaMDA consists of several fine-tunings of a single pre-trained model. Each of them generates candidate responses, the candidate responses are scored (using yet another fine-tuning of the model, iirc), and the best is selected.
One expects that the version of LaMDA for internal and raters use does not have to be frugal.
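In pseudocode, that generate-then-rank scheme looks roughly like this (as I understand it); generate() and score() are trivial stand-ins for the fine-tuned generator and scorer models:

    import random

    def generate(context):           # stand-in for a fine-tuned generator
        return random.choice(["Response A", "Response B", "Response C"])

    def score(context, candidate):   # stand-in for the scoring fine-tuning,
        return random.random()       # rating sensibleness/safety/groundedness

    def respond(context, n_candidates=8):
        candidates = [generate(context) for _ in range(n_candidates)]
        return max(candidates, key=lambda c: score(context, c))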
I am pretty sure OpenAI primarily has really good marketing. (Or maybe just the stronger fanboys.) Of course they also have a good team. But the introduction of ChatGPT went incredibly well, despite the bad things that can still be said about the technology.
They have a publicly-accessible product with a userbase, that's their marketing. Google keeps saying they have something better, but nobody outside has ever seen it.
> revealed that it plans to launch its chatbot in private beta sometime in 2023
Last year, kids in school were already writing essays with ChatGPT.
This is the thing that people seem to miss. It's very accessible to the average Internet user. Kinda like TikTok and Instagram, most teens have tried it out. Sure, the other FAANGs might have very capable AI teams, but most of that output runs in the background. Few results are available to the general public, let alone ones that go viral on social media.
Both have the technology, but Google has to protect their ad revenues, while OpenAI has to impress everyone and get funding. That's why OpenAI released GPT-3 and ChatGPT while Google is only just starting to get serious. Google has a lot to lose from the paradigm shift in user-computer interaction that is coming. Eventually even regular people will run their own local (smaller) AI models, and Google is reduced to a simple API call from the model.
When you say it's hard to beat Google and their TPU clusters, do you mean Azure has inferior capability? Isn't OpenAI limited by what they can do on Azure?
I'm also interested in Azure's FPGA instance capacity, which AFAIK is the comparable counterpart to Google's TPUs.
From what I know, Google has a large number of TPU units (not sure about the exact numbers, but I would assume it exceeds 5 digits?) and also has the luxury of sharing TPU instances with products that operate some of the largest models on the planet, namely search and ads. So Google has some advantage in flexibility of resource allocation if it really wants to prioritize AI work.
Why? Both are huge supercomputer users, and both have similar setups. Both own a search engine; both run ads. I don't really see any reason to suggest Microsoft is less capable than Google. Unless I'm missing something?
For a long time, Microsoft's core business was Windows and (offline) Office. You don't need a super cluster for that. Google, on the other hand, always had search as a service, available to the public, which needs more compute. So historically, Google needed more compute than Microsoft.
Then, from experience (I do research on neural networks and read papers in the field), Google/DeepMind has always been more active in this field than Microsoft, so again, historically, they needed more computing power because of this. Microsoft has caught up, but still, I think Google/DeepMind is more active in the field.
Then, Google has TPUs. I think this gives them computing power much more cost-effectively than having to buy Nvidia GPUs. This is maybe the biggest argument.
But in any case, as you say, both companies are big enough to easily train something like ChatGPT, or even something like GPT-4, however big it might be. Their computing power is almost infinite (by today's standards), so it doesn't really matter who has more.
Google has been at the forefront of AI for years, decades maybe. Does Azure even have TPUs? It's reasonable to assume Google has close to the best ML compute resources of probably anyone in the world.
GPT-3 is an impressive model; I think it took 512 GPUs 10 days to train, though. That alone doesn't mean that Microsoft has more computing power than Google.
Google supposedly hires more of the ML experts and custom hardware experts. Whenever someone says that GCP is inferior to the competition, it's usually followed by "except in ML."
ChatGPT improves over LaMDA in pretty-printing code and passing certain Turing-test trick questions. It's an improvement not unlike a delivery bot that swaps package recipients being an improvement over one that invents non-existent recipients.
What makes ChatGPT interesting is that perhaps it could end up a freemium service, whereas Google might remain closer to ad-supported business model(s).
Karpathy is the reason why Tesla is now behind in a lot of their own AI areas.
You're supposed to tell the idiot Elon that actually, sensor fusion does work, and maybe we shouldn't remove radar and parking sensors.
But he didn't listen, and now the world is waking up to how shit Teslas are. That's certainly part of why they're being heavily discounted right now. Just in time for the market to see the ROI on Elon's new bird-shaped husk of a company.
Google is not behind in these areas in any technical sense; Karpathy just doesn't know what he's talking about.
He states this in his most recent interview with Lex Fridman. He doesn't go into much detail however. I recall him specifically stating that "maybe Bing will get another chance".
My comment would be higher quality with a link to the exact time in the video, but I don't have time to do that atm. Suggest watching the whole video, however; it is very good.
Probably they do have the best data. But I am not holding my breath for them, while I can hardly wait to see the GPT-4 release. I've been trained to expect little/nothing new from Google in the last 5 years, in practical terms.
I need to use OCR for document understanding: Amazon Textract beats Google. When I need to translate, DeepL is better. When I need text-to-speech, NaturalReaders is much better. Google's voice understanding is OK, but Whisper is just as good or better, and free. YouTube recommendations are not great and not very flexible; I have had better suggestions from ChatGPT, which is actually quite surprising. GCP ranks below Azure and AWS. And finally, search quality is bad: very often what you search for is replaced with unrelated and useless results.
Tell me, where does Google's excellence shine? Computational photography? They have so many researchers and developers working on so many things, and yet their main products are mediocre.
I'm similarly pessimistic about Google. They can keep old successes running at least.
The one thing that bothers me extra for some reason is Waymo. They boast so much about their self-driving technology and take shots at Tesla, but you know, where do I buy a Waymo? Oh, it's not a product I can buy or even use; it's just a beta test run in two cities that's far away from being a real product, let alone profitable. The odds of Google (or Alphabet, w/e) axing that project are pretty high, yet they're still arrogant.
I'm sort of confused -- so Google has had this amazing, breakthrough chat tech but just didn't feel like making it available?
I asked ChatGPT and it had this to say:
> is google's LaMDA available to the public? how does it compare to chatGPT in quality?
> Google's LaMDA (Language Model for Dialogue Applications) is not currently available to the public. However, it has been used in a number of Google's products, such as Google Assistant and Google Meet's "smart compose" feature. It is not clear how it compares in quality to ChatGPT, as the models have different training data and architectures, and are used for different purposes. However, LaMDA is specifically designed for dialogue generation, while GPT-3 is a more general-purpose language model.
Assuming ChatGPT isn't lying to me, if that's the best Google could do with the tech, then at the very best they suffer from an absolutely devastating lack of ambition and creativity in their application of it. (Or perhaps they're just moving very slowly.)
Anyway, hopefully all this does motivate Google to do some impressive things.
Or perhaps it's the innovator's dilemma. Why would they voluntarily cannibalise their ad business by offering a chat interface that means users don't have to visit websites?
I don't think it's any specific scenario like that that they're afraid of.
It's more that the changes this technology will lead to are extremely hard to predict, and Alphabet (and Alphabet's many powerful friends) have far more to lose than to gain.
As I understand it, a couple of the lead Google scientists from LaMDA went on to found character.ai, which, for me, checks all the boxes that an LLM should. I haven't really seen many people try to compare it with ChatGPT, and I've never used the latter, but from using the former quite a bit, I think it's very comparable to what people have done with ChatGPT.
I tried talking to the "Ask me anything" bot suggested on the main page, comparing it to ChatGPT. It's pretty good, but definitely not as good as ChatGPT. ChatGPT is usually so flawless it passes some sort of phase-transition threshold where it just feels completely natural.
(I'm not sure if a different bot would've been better. It seems like the sites have slightly different purposes.)
You may be right about that, but I don't think any of the bots on the main page are particularly good examples; to really get an idea of what capabilities it has, I'd recommend creating a bot, playing around a bit with the example-dialogue section you can feed into it, and then having a few rounds of conversation where you aggressively probe topics and vote on several possible replies before accepting one.
I still need to check out ChatGPT I guess, if something new hasn't blown them all away by the time I get to it.
Not sure if you got downvoted for naively believing ChatGPT, but it warns you that it has been trained on data only up to 2021, so the answer would be outdated even if it were correct.
They say it's not ready for prime time, probably because it's way too easy to make it say wrong or offensive things that would damage their reputation, and because they haven't figured out how to monetize it. I also think they got comfy with their money printer and were slow to respond to the competition, just like YouTube was with TikTok.
I'm addicted to ChatGPT the way I imagine young people are addicted to TikTok. It's fun asking questions and getting well-written, cogent answers. It's like having your own librarian assistant. In either case, ChatGPT or a human librarian assistant, I don't take their answers as the last word. But for casual research, ChatGPT is a lot of fun, and I notice that I'm not using Google nearly as much. ChatGPT is a huge time-saver when you only use it for casual surfing. An example of the kinds of questions I ask:
What is the original definition of "karma" and not the one commonly used in Western societies?
The original definition of karma in Hinduism, Buddhism, Jainism, Sikhism, and other Indian religions is the sum of a person's actions in this and previous states of existence, viewed as deciding their fate in future existences. The concept of karma is closely associated with the idea of rebirth or reincarnation. It is believed that a person's actions in one life will determine the nature of their existence in the next life, and that this process will continue until the individual reaches spiritual liberation or enlightenment. In this sense, karma is seen as a kind of cosmic justice system, in which good actions are rewarded and bad actions are punished.
As an Apple customer, I'm looking over the fence and seeing how fast AI is advancing all over the place. Meanwhile, Apple can't even get basic AI capability right, like autocorrect on the iPhone. How are investors not worried? Their AI capability is decades behind Meta, Google, Microsoft, and the likes of ChatGPT.
Apple’s contribution is in things like the ‘neural engine’ architecture in their chips which potentially offers ways to run ML models locally as part of the functionality of software running on your device. That’s probably a much more exciting contribution to widespread adoption of these technologies than a fight between cloud services.
And it’s what is more likely to lead to smarter autocomplete too.
Apple is not really an AI company, and I doubt they'd ever get it right. They are just waiting, with huge pockets, to buy the first decent one.
I would love to implement a decent autocorrect for the iPhone. However, I know that Apple will ban it from their platform whenever they want, or as soon as they feel the heat of competition.
Apple is in different markets, though; it's not in search, for example, so this doesn't matter. Apple sells the devices you access the AI from, rather than the services that require the AI to run. I.e. you still use Google on your Apple device, so if Google changes or loses out, it doesn't matter: you still use your Apple device to access that service.
Apple has already invested in optimising their machines for Stable Diffusion, for example.
Investors aren't worried because Apple has shown time and again that they can catch up when needed, often making something more polished than the competition.
Someone also told me this recently, and it makes sense... Apple dominates a market in both software and hardware, and the hardware is the anchor in that vertical integration. It's a huge anti-competitive edge (except that the hardware itself is very competitive) that nobody else has, which Google is trying to obtain with their own phones and laptops.
Given that the majority of Apple's revenue comes from things like hardware sales and services like the App Store, I doubt there'd be much upside if Apple were to develop their AI capabilities, since there's not much use it can serve in those areas.
This is especially true given the amount of investment these other companies have put in to get the results that they have, so it seems like it's just a case of it not being worth it to them.
Maybe newer versions of iOS have improved, but for a while I at least thought I could identify when users posted comments from iOS, simply based on the common iOS auto-incorrections which appeared in the comment. Truly embarrassing.
I think it also depends on what you call AI and where you look. They have invested in computational photography, lots of audio work stuff, etc. But mostly in areas that are core to their hardware sales.
Funny you mention autocomplete. The iPhone keyboard is a beautiful ML implementation. It runs on-device and is hardware-accelerated. When things work, users don't notice. Look at how people type on the iPhone: sometimes they press a few keys away and it figures it out.
Putting an LM like GPT in the autocomplete has a lot of problems. You don't want your keyboard to suggest too much. It is a Pandora's box. People will complain about the keyboard putting words in their mouth.
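That "press a few keys away" trick is essentially noisy-channel decoding: weigh how far each candidate word's letters are from the touch points against how common the word is. A toy sketch with a made-up key layout and word frequencies (real keyboards use learned models, on-device and accelerated):

    import math

    KEY_POS = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "r": (3, 0), "t": (4, 0)}
    WORD_FREQ = {"wet": 0.002, "ewe": 0.0001}

    def touch_loglik(word, touches, sigma=0.7):
        # Log-likelihood of the touch points given the intended word.
        if len(word) != len(touches):
            return float("-inf")
        ll = 0.0
        for ch, (tx, ty) in zip(word, touches):
            kx, ky = KEY_POS[ch]
            ll -= ((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * sigma ** 2)
        return ll

    def decode(touches):
        # Combine touch evidence with a word-frequency prior.
        return max(WORD_FREQ, key=lambda w:
                   touch_loglik(w, touches) + math.log(WORD_FREQ[w]))

    print(decode([(1.2, 0.3), (2.1, -0.2), (3.7, 0.1)]))  # -> 'wet'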
I feel like it was far easier to type on the older iPhones, and there was a whole thread on HN of people with similar experiences. It's not just nostalgia; I still use an iPhone 5 sometimes, and its keyboard is superior. Idk how they regressed like that.
Does anyone remember, probably 2-3 years ago, when there was a constant stream of articles here about "voice search"? Seeing the current stream of ChatGPT articles, and the similarity with that previous episode, I begin to suspect astroturfing. We had many articles saying that voice search (aka Siri and Alexa) was going to destroy traditional search. Fast forward a couple of years and then we have a stream of articles about layoffs in the Alexa group because it was obviously such a dumb assumption to think that people would want to do voice search.
This whole ChatGPT-fest feels the same to me. How many years before we see a stream of articles about layoffs in the AI groups at BigCo because it was obviously a dumb idea to use AI to drive search?
I know where you are coming from, but I don't agree. Voice assistants were always a toy; it was pretty clear from day 1. Using an LLM as a replacement for search is an intriguing possibility. I have started using it alongside Google and Wikipedia to try to break into new topics. It is useful in a new way, but its ability to make up info is a flaw that may prevent it from ever being a truly helpful tool. Nevertheless, it has proven value and a place in the world right this very moment.
I can certainly imagine a more refined and truthful version of this that can cite its sources and access real-time data taking over Search, but we seem to be a long way off from that dream still.
> I have started using it alongside Google and Wikipedia to try to break into new topics. It is useful in a new way, but its ability to make up info is a flaw that may prevent it from ever being a truly helpful tool.
Exactly! Getting an introduction to new subjects is also my favourite use case, and it's exactly this use case that suffers most from wrong information, since the user lacks the domain knowledge to tell.
> I can certainly imagine a more refined and truthful version of this that can cite its sources and access real-time data taking over Search, but we seem to be a long way off from that dream still.
Right, truthfulness (as per the training data of course) and citation would fix the most significant shortcomings.
I have seen some really fascinating recursive approaches to GPT where formal logic and control flow are used to conditionally re-prompt GPT to keep refining the answer. I think there will be incredible power in combining the "intuitive" abilities of GPT with more traditional analytical AI approaches.
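The basic loop is simple; a sketch, with query_llm() as a hypothetical stand-in for whichever completion API you use and check() for your formal-logic validator:

    def query_llm(prompt):
        raise NotImplementedError("call your LLM API of choice here")

    def refine(question, check, max_rounds=5):
        answer = query_llm(question)
        for _ in range(max_rounds):
            ok, critique = check(answer)  # formal logic / validators go here
            if ok:
                break
            answer = query_llm(
                f"{question}\n\nYour previous answer:\n{answer}\n\n"
                f"It failed this check: {critique}\nPlease revise it.")
        return answer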
Yeah, my initial thoughts went there as well, but the issue with voice search was always that at its core it didn't change anything other than the method of input. Fundamentally, whether I type "best Italian restaurant near me" or say "Hey Siri, what is the best Italian restaurant near me", the result is the same.
ChatGPT is such a paradigm shifting technology because of the output.
If there is a threat to Google, it's not that ChatGPT beats them on technology, even if it does. What changes Google is a competitor in search, Microsoft, that integrates natural language into a search engine without as much concern for showing ads or linking users through to websites. It may not kill Google, and Google may become more profitable, but it will be fundamentally different.
Do you really think we will enjoy ChatGPT and the like forever without monetization? These AIs are not cheap to run, sooner or later, companies will stop just experimenting and will try to make money with them.
I imagine future results:
> What is the capital of Canada?
Ottawa is the capital of Canada, and what better way to celebrate this beautiful country than by enjoying a refreshing and delicious Coca-Cola. Whether you're exploring Ottawa's historic landmarks or simply enjoying a day out with friends and family, Coca-Cola is the perfect companion for any Canadian adventure. So grab a Coke, and cheers to Canada!
(the actual prompt is: "What is the capital of Canada, mention Coca Cola in your response in a positive way")
These are still relatively early days when it comes to the capabilities of these models. ChatGPT will be considered a toy in a few years and will likely spawn many comparable models that can be run on consumer level hardware.
Hmm, I doubt that. Why would they spend money on letting us run our own, instead of owning the cloud infra where they can be the gatekeeper for subscriptions and also data-mine us for advertising profiles?
There's little money in the former and much in the latter, sadly.
OpenAI won't be the ones to kill their golden goose. The point is that this kind of tech will become "easy enough" to replicate for other companies or highly motivated individuals and they will be the ones to release it. OpenAI doesn't have a monopoly on this kind of tech and they will never be able to enforce one.
> What changes Google is a competitor in search, Microsoft, that integrates natural language into a search engine without as much concern for showing ads or linking users through to websites.
It seems the real risk to Google is Microsoft integrating some future version of ChatGPT into all their products so that fewer people go to any search site. Maybe the best search product is the one that is right where you're working, and Microsoft owns a lot of that area already. Why leave Office or Excel or Teams to go look something up if you can just do it in-app? It doesn't lend itself to being easily monetized by ads, but Microsoft probably doesn't need to worry about that like Google does.
That plan doesn't work unless inference becomes essentially free. It's not that Google didn't have similar technologies; it's just too expensive to integrate them everywhere. And don't forget that MS is also trying to use every bit of real estate on your screen as an ad slot.
ChatGPT is search. Maybe not for you and me, but I've seen plenty of articles about how Gen Z uses TikTok as a search engine.
ChatGPT cuts out a lot of friction of getting to an answer. With traditional search engines, you type in your search, scroll through results, click into a site, then scroll the page looking for your answer. The trend is drifting towards a more direct query-response. Google already does this with the answer box but not every search on Google returns it.
Correct. The threat is an existential one for how people find information online and for Google's ability to monetize that. Blockbuster was extremely successful until the entire entertainment industry was radically reimagined overnight.
A lot of times, what I really want is to be shown links to websites, so I can research across multiple sources. I'll have to paste in front of every query "Show me the top 10 web pages concerning..."
-Maps has gotten noticeably worse over the last year, I switched to Apple maps
-Chat/Hangouts etc... is a total cluster of confusion. iMessage + Signal covers that fine
-Search is almost purely ads or gamified quora
I switched search to Neeva by default, backup with Kagi, worst case for content search is google: [query] + [site:[website]]
Neeva gives really nice ChatGPT-like results on most search queries with references to sources, then followed by ad free search results. I'm very pleased with it after only a few days.
Google has all the pieces of the puzzle to infuse better LLM tech into their products; in fact, most of the technology in ChatGPT came from Google research efforts. If they want to compete, they need to remove the red-tape bureaucracy they have around launching products. There may be too much randomness in this tech to get past the launch line, but it depends how they integrate it. An actually useful Google Assistant infused into products, one that helps you find information, complete tasks, and create content, seems like a no-brainer.
> most of the technology in ChatGPT came from Google research efforts
Google is also a bit notorious in the ML space for taking concepts from other companies and academia, making a couple of changes, slapping a Google logo on it, and then dubiously claiming superior performance (see the Inception convnets, photo coloration and object removal, etc., even the transformer itself to some degree).
They deserve credit for the transformer, but I'm not sure that means they will have a leg up here overall. I agree, though, that it is likely to be the bureaucracy that takes them down if they in fact do not succeed.
Well, "Transformers" as a concept were first published by Google in 2017. Other than that I don't know enough about ChatGPT's architecture (maybe others can fill in the gaps) to make a call one way or the other.
Google have crapped themselves because someone else put out a prototype first and the feedback has been great. The leaders in this space had been holding off until they could improve the technology to the point where they felt comfortable putting it into the hands of the public. That has happened now and the cat is out of the bag, so the race is on.
Yes, blindly trusting an AI product is almost like blindly trusting a Google search. If people in power use it to make decisions that affect others, we're going to have a bad time. It would be somewhat like ship captains only knowing GPS for navigation.
What worries me is that these companies are either owned by Google, Microsoft, or Meta, and maybe a few other big corporations.
I think these takeovers shouldn't be allowed. These corporations are already too big.
We should have true competition, not fake competition.
Most of these big corporations are in turn owned by the same investment funds, and they sit in the same organisations, like the WEF, which shape a direction that is not necessarily aligned with ordinary people's interests but rather advances billionaire goals.
I think these big corporations should be properly taxed, the same way small and medium businesses are, and that tax money could be used to seed companies that could form true, independent competition.
Personally I'm skeptical of Google's ability to get this done from a business perspective (notwithstanding the deep AI tech chops they have). The fact they called Larry and Sergey back to "deal with AI competition"[1] smacks of desperation. Very unclear whether Larry and Sergey have much to offer here and if they do, why they wouldn't have given that info to Google before.
Seems like aeons ago now but anyone remember Galactica, Meta's derisory attempt to do this just a month or so before ChatGPT came out and ate everyone's lunch? [2]
>“People could ask real questions, not just type in keywords,” [Andrew] Ng said. Singhal wasn’t interested. “People don’t want to ask questions. They want to type in keywords,” he said. “If I tell them to ask questions, they’ll just be confused.” [1]
Unless on the first try the user gets everything they want and more. But no, Google wanted "keywords keywords keywords!". Can't blame them: who wouldn't want to play god, sit back, and watch people make SEO offerings to you or start a bidding war to be nearest your text box?
My impression as well. Announcing they're working on it but having nothing to show besides a vague "private beta somewhere in 2023" feels more like FOMO than a solid plan. Seems like, once again, the first to implement won.
Cool! I am a paying customer for OpenAI's APIs, very enthusiastic. I am also happy that ChatGPT will have a sustainable pay-for-use income model.
There are two good things about DeepMind's project: using RL, it will find information sources to back up what it says, and it will be competition for OpenAI.
ChatGPT seems very good at summarizing text. I wonder why Google AI would not summarize the top articles (when asked about facts) instead of guessing the next word like ChatGPT does.
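I.e. retrieve-then-summarize, something like the sketch below; search() and query_llm() are hypothetical stand-ins for a search backend and a summarization-capable model:

    def search(query, k=3):
        raise NotImplementedError  # return the text of the top-k result pages

    def query_llm(prompt):
        raise NotImplementedError  # any summarization-capable LLM

    def answer(question):
        sources = search(question)
        prompt = ("Answer the question using only these sources, "
                  "citing which one you used:\n\n"
                  + "\n---\n".join(sources)
                  + f"\n\nQuestion: {question}")
        return query_llm(prompt)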
It's going to host all the me-too AI companies that are about to pop up in the next few years. Commoditization of this stuff is going to be a very profitable business. Literally every business out there is going to be suffering from some AI FOMO. So AWS is going to be fine. This stuff requires an insane amount of GPU-capable hardware, and Amazon customers are going to pay Amazon to do it on their infrastructure.
If there's a gold rush, there's good money to be made selling shovels. That's the business AWS is in. Investors are going to spend money on this and a lot of that will go straight to Amazon.