
(Site was down.)

Google already has many similar models. The most similar is probably their LaMDA model: https://blog.google/technology/ai/lamda/ It's strange that the article doesn't mention that.

I never really understood the argument that OpenAI has some technology that Google does not have. That's just not true; if anything, the opposite is true. And Google's LaMDA is even older than ChatGPT.

Also, the technology itself is not really that fancy. What matters is that you have enough computing power. When you have that, most people could replicate something like ChatGPT.

And in terms of computing power, it's hard to beat Google and their TPU clusters. I don't really have numbers, but I think Google also wins here.



Google is an established brand. When ChatGPT makes up fake historical facts or has Opinions about Races, that's a cute quirk of the beta technology you were trying out. When a regular Google search does the same it is perceived differently. You can tell they've tried to be conservative with the AI-driven "People Also Ask" feature which quotes a snippet of source text, and I still see completely wrong answers every so often.

They also depend on distinguishing "real" content from spam in order to have pages to link to and train on, and with GPT we may see the conclusive defeat of spam detection. There are detectors that claim to detect ChatGPT, but I suspect motivated adversaries can defeat those by training their own models.


> You can tell they've tried to be conservative with the AI-driven "People Also Ask" feature which quotes a snippet of source text, and I still see completely wrong answers every so often.

All well and good, but that doesn't explain why it wasn't Google that came up with Copilot.

Paul Graham predicted that a Google Search competitor would target hackers (a dinosaur egg, he called it) and build Search in the image of Unix: "give you the right answers fast". That is exactly how it is playing out with OpenAI + GitHub + Microsoft.

https://www.youtube.com/watch?v=R9ITLdmfdLI&t=250 / https://ghostarchive.org/varchive/R9ITLdmfdLI


Because Copilot is a copyright hellscape that litters codebases with landmines.

Anyone using it for company-owned code is relying on the cloned code never being discovered.

Google would have learned an expensive lesson from Oracle v Google. Even the smallest snippets of "copied/replicated/duplicated/parallel implementation" code are bad juju.


I wonder why Microsoft (GitHub) didn't harbour such reservations.


The model itself would probably be considered a transformative work, and Microsoft likely has a broad license to do whatever they want with code hosted on GitHub.

However, there is a reason the Copilot instructions say that users shouldn't trust the results.

"These include [...] IP scanning"

We will learn from the trial whether or not it is possible to wash away copyright by sending it through an opaque, lossy compression algorithm. And if it turns out that people are free to use a model trained on _any_ source code it finds, regardless of license?

Zowie, that would be a bigger change to the industry than if Oracle had won.


I believe Google was asleep at the wheel. If you go back to the last decade, copyright claims didn't stop Google from acquiring Android and YouTube.

Besides, a few days ago some in the media claimed that Google is already working on a Copilot competitor.

Copyright and license infringement is a valid claim, but personally, I feel it'd be a shame if AI couldn't be trained on publicly available sources (or Google Search couldn't index publicly available websites, or Google Maps couldn't map the world, etc).


How is ChatGPT a "for hackers" thing?


I am thinking about Copilot, not ChatGPT per se.


Google loves to put that red BETA tag on its products. Remember how Gmail was in beta for two decades?


Point taken, but it was five years, not twenty.


The real issue starts even before that: Google search is still a cash cow, and integrating some GPT tech could threaten their own economic model. They are in a tricky situation here.


No, the cash cow is ads, search is incidental.

Showing a quick answer from AI is no worse for them than showing a quick answer from their knowledge graph (which they already do when available).

What matters monetarily is the people searching "best 2023 SUV" and clicking on dealer ads.


Search is exactly how they display ads, though. Language models could replace a very large portion of classic search, maybe 40% of it, with a different interface. How they are going to evolve their current business model in the face of this new reality is unknown.

Any tech initiative will be blocked by the executive side afraid of losing the cash cow.


Targeted, useful ads are so rare almost nobody has seen them. How can they push low quality ad inventory through a trusted AI? If they train the AI to trick people into bad purchases, it will be a disaster.


Google has been slow because they can’t figure out how to release this without destroying their business model.

It’s very hard to be commercially successful with an innovation when it attacks the cash cow side of your business. It can be done, but it creates a lot of internal conflict and incentives to wait, to gimp the innovation, or to try to bolt it on to the cash cow to mitigate the damage.

I have no doubt Google could be a leader in the LLM space... But are they willing to destroy their own moat and take a serious revenue hit to do so?


This is the same situation Kodak found themselves in years ago. Do you accept that the business landscape has changed and take a significantly smaller role in the market, or do you hope that somehow the alternative becomes unpalatable and you can keep your market position?


It's the innovator's dilemma: do they embrace new products outside of their historical core cash cow, or do they stick with the cash cow? Search and ads are where the history is, but empowering a million different AI-powered products via an API is the future. And while empowering AI products for third parties is a boon for the cloud business, running those APIs has a business model akin to "trading time for money" rather than directly getting a cut of revenue.

It seems less sexy to be a commodity provider of AI than to be the one making the products, and to get a share of cost rather than a direct share of revenue. But I have heard that during the gold rush it was better to be a seller of tools and infrastructure than a prospector.


Microsoft is squarely positioning itself to be the top provider of cloud AI. They want to sell the shovels in this gold rush.


The business landscape has not changed yet, it's still early stages and there are no competitors utilizing the tech to displace Google. However, one can now clearly see the threat and what might transpire in the next few years.


27% of professionals use ChatGPT for work already.[0] If you ask what it's replacing, most will say Google.

[0] https://thehill.com/policy/technology/3821400-nearly-30-perc...


That's impossible. They must have run this survey at a Silicon Valley networking event on AI or something. A rough search shows there are 63,644,000 "professional workers" in the US, so this survey is saying 17,183,880 Americans are using ChatGPT at work. How can anyone seriously believe that stat?


> 27% of professionals use ChatGPT for work already

What planet do you live on? I work in midtown Manhattan; I bet if I walked down the street, 75% of people wouldn't even know what ChatGPT is, let alone that the other 25% all actually use it.


> 27% of professionals use ChatGPT for work already.[0]

> found that 27 percent of professionals have used the program to help them with work-related tasks.

«Have used»: as a free test, at least once. Maybe twice. But tomorrow? Until real value, habits, and serious use cases are established, my estimate of day-to-day professional use is still anywhere between 1% and 26%.


Have to admit that I find ChatGPT's ability to provide customized code examples uncanny.

It's often a good starting point. Sometimes there isn't much to change, if anything.

It can even design some custom algorithm.

When you know what you're doing, it's a very nice helper.


Yeah, I personally find ChatGPT to be easily 1 to 2 orders of magnitude better than Copilot, specifically for generating code or transmuting code from one language into another language. Really powerful.


I wonder why that is, since they are both basically using GPT-3 as a base.


I don’t know the glue, but ChatGPT is dramatically better at understanding context and goals, and it has that whole refinement layer for iterating on responses.

There are also things ChatGPT can't/won't do that GPT-3 will, like writing music.


87% of survey stats are made up.


But Google is investing in the research. They could become Kodak if they were left far behind, but that's not the case at this point. They just refuse, for the moment, to be the leader they could be.


Kodak had some of the most advanced and extensive photo research of anyone. They had early digital cameras but couldn't let go of the film-processing revenue (so they came up with the "Advantix" failure).

Kodak had so much research that they pretty much survived on licensing their patent portfolio.


I don't think this is true at all. First you'd have to show that the new thing is useful in the same ways that the old thing is, and that the new thing has a viable business model.

I don't think ChatGPT has actually done that yet. Can they even charge enough to cover costs? If they do, will enough people use it? And the questions about accuracy are material: if the LLM lies all the time, can it really replace a search engine? What do you do with a months-old training set?

Figuring out how to combine search + LLM is the problem and OpenAI doesn't have a search engine.


Even if it sometimes lied, I would pay real money for a service that answered questions like "How do I cook basmati rice in an Instant Pot?" Google has lost the plot on queries like this, and I don't have time to wade through the masses of bullshit it returns. I would guess that the average person is in the same boat, preferring a simple answer that might be wrong to an endless stream of blog spam.


They're going to charge $42 a month, which is a lot. That should cover the costs easily. It will put people off using it for personal purposes, though. Where I live in Spain that's a lot of money: about half my monthly energy bill (and I'm an IT enthusiast with 24/7 servers and a 3080 Ti gaming PC).


> Google has been slow because they can’t figure out how to release this without destroying their business model.

You can still display ads. My bet would be that they equally have not figured out how to make it reliable.


Embedding ads is much more natural than displaying them. I don't think Google will let you know which part is paid, just like how product search moved to ads-only.


Google search isn't reliable either. Most of the results I get are either irrelevant or at least partly wrong (occasionally outright misinformation). But it's clear that the content comes from third parties without endorsement by Google so advertisers and users mostly accept it.

When Google deploys a large language model to directly synthesize content it will carry a greater reputational risk for them and their advertisers. What happens when an ad appears next to an AI answer that could be construed as racist or promoting medical misinformation? The legacy media already hates big tech and will spin it as a "gotcha" moment.


Remember Microsoft putting Win95 on the first tablets and making them pretty much unusable? When it's the cow keeping the lights on, everything grows teats.


>Google has been slow because they can’t figure out how to release this without destroying their business model.

I don't think that is the real explanation, because even I can think of how to monetize it: just have AdWords read your chat and advertise to you based on it. Far from a clever idea, it's exactly the way search already works!
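In the crudest possible form, purely hypothetically (the inventory and matching below are invented for illustration, not how AdWords actually works), that could look like:

    # Hypothetical sketch: target ads off chat content by naive keyword matching.
    # The ad inventory and matching logic are invented for illustration.
    ad_inventory = {
        "suv": "2023 SUV deals near you",
        "mortgage": "Compare mortgage rates today",
    }

    def pick_ad(chat_text):
        """Return the first ad whose keyword appears in the user's chat turn."""
        words = set(chat_text.lower().split())
        for keyword, ad in ad_inventory.items():
            if keyword in words:
                return ad
        return None  # fall back to untargeted inventory

    print(pick_ad("What's the best 2023 SUV for a family of five?"))
    # -> "2023 SUV deals near you"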


There are a bunch of unclear questions about the production readiness of this as-is.

Where does ChatGPT get its data? Does it just hijack the data from websites without referencing them? That might work for a limited dataset (commercial or loose licences), but Musk already said he won't let it use the Twitter data, for example. Google search is a mutual benefit to the publishers that provide the data, because it drives them traffic. This thing doesn't drive anything, so if I as a publisher found my data in it, I would be getting ready to sue them.

There are other questions too, like: how do you continually retrain as you crawl the bigger web? Across a really large number of sources, how do you rank and filter out crap spam from the actual useful, trainable content? So it's way too early to kill regular search imo.
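On the filtering question, there is at least one published recipe: the GPT-3 paper filtered Common Crawl with a classifier trained to tell curated reference text from raw crawl. A rough sketch of that idea (the one-line corpora here are placeholders):

    # Sketch of crawl filtering in the style of the GPT-3 paper: train a
    # classifier to separate "known good" reference text from raw crawl,
    # then keep only crawled documents that score as reference-like.
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import LogisticRegression

    curated_docs = ["a carefully edited encyclopedia paragraph about rice"]  # placeholder
    crawl_docs = ["BUY CHEAP pills CLICK here best deal !!!"]                # placeholder
    new_crawl = ["a recipe blog post about cooking basmati rice"]            # placeholder

    vec = HashingVectorizer(n_features=2**18)
    clf = LogisticRegression()

    X = vec.transform(curated_docs + crawl_docs)
    y = [1] * len(curated_docs) + [0] * len(crawl_docs)
    clf.fit(X, y)

    # Keep only pages that look more like the curated corpus than like spam.
    keep = [doc for doc in new_crawl
            if clf.predict_proba(vec.transform([doc]))[0, 1] > 0.5]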


> Does it just hijack the data from websites without referencing them?

How does one hijack data? Are you referring to scraping public websites that publish data for free? I wonder what the appropriate way to reference web-scraping everything publicly available would be. I don't think Google or other search engines do this, but I like the idea of listing out a massive set of trillions of URLs that have been included in the training, along with a scrape date.

I'm not sure Google is a mutual benefit, as they summarize site content and prevent visits to those sites. It would be interesting to measure the pros vs the cons there. I know there have been lawsuits over the years but don't know the resolutions.


Yes, I'm talking about scraping.

Google drives massive amounts of traffic to web publishers. It's carving out a bit for itself here and there with things like e.g. weather and answers to common questions, but they still drive a ton of traffic, and that's why (pretty much the only reason) the websites tolerate it.

Just because text on a website is "public", it doesn't mean you can copy it and plug it into your product. Nothing is "free" by default; many publishers have a deal with Google about what can and cannot be used, specified in robots.txt or in Search Console.
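The robots.txt side of that is mechanical; a small sketch using Python's stdlib parser (the rules below are made up, but this is how a compliant crawler decides what it may fetch):

    # Decide what a given crawler may fetch, per a site's robots.txt.
    # These example rules are invented: Googlebot may read articles, others nothing.
    from urllib.robotparser import RobotFileParser

    rules = [
        "User-agent: Googlebot",
        "Allow: /articles/",
        "Disallow: /",
        "",
        "User-agent: *",
        "Disallow: /",
    ]

    rp = RobotFileParser()
    rp.parse(rules)
    print(rp.can_fetch("Googlebot", "/articles/rice.html"))       # True
    print(rp.can_fetch("SomeLLMScraper", "/articles/rice.html"))  # False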


> Just because a text on a website is "public", it doesn't mean you can copy it and plug it into your product.

That's true, but you can for things like training models.

The restrictions come about if the content is copyrighted and republished, and training a model doesn't republish anything.

If content providers don’t want everyone to access, then they should password protect or otherwise limit access.

So web scraping is not “hijacking.”


Hmm, no, but offering that model as a service does have a strong possibility of infringing copyright. I think the legal complexity of this will have to be hashed out in court over the next few years.


I mean, can't you still have ads based on the response?


Exactly. In fact you will have much more context to go on: the user will not only tell you what they're searching for, but often also why, plus other pieces of context like what they want to use it for.

This sounds like a marketer's paradise.


They also didn't release Imagen, which doesn't seem to pose any particular threat to any of their existing businesses. And doing a ChatGPT-style demo would also not have posed any threat to their business, for the same reason ChatGPT itself doesn't: you can make it available for a while, get loads of attention, and then paywall it until you figure out a better approach.

The PR threat seems overblown. OpenAI isn't suffering any obvious reputational problems from making models available to play with, because people understand that AIs and their creators aren't the same thing.

Also, though OpenAI is mostly a collection of really cool tech demos that may or may not turn into a business, not releasing demos like them is a major strategic blunder for Google. If you're an ambitious AI researcher where do you want to work now - the place where your work will get put out there in front of millions of people and become an overnight sensation, again and again? Or the place where your work will be put in front of, at best, a bunch of Googlers, and the only public knowledge of its existence comes from leaks and a web page saying "we can do that too"? Unless Google makes a vastly better offer than OpenAI can, it's going to be OpenAI.

They've made this mistake before, with the cloud. Years ago, even before AWS existed at all, Borg and related tools were vastly superior to any publicly available cloud system. When I worked there we repeatedly asked management why that wasn't being turned into a business because it was so damn obvious that there was massive value in what had been built. The answer was always that Google could always make more money by keeping its tech private, because ads was such a great business model. That take ... didn't age well. They lost their chance to define what cloud meant and now GCP is lagging in third place.

Unfortunately for Larry and Sergey I think the reason they aren't keeping up with OpenAI is actually not business model related, which is why they're going to have a hard time fixing it. They actually told us why they aren't releasing demos already and there's no reason not to take them at face value: they believe AI is literally dangerous and "unsafe" to release.

It's been apparent for some time that there's some sort of weird internal purity spiral going on inside Google. Imagen is still only available to employees and, apparently, has been filtered so it refuses to draw people. If you want it to draw a person-like thing you have to ask it to draw robots. That's not a business model concern. The rationale appears to be some sort of DEI maximalism: if you asked it for people it might not draw the right kind of people, that would make Googlers/people racist, and so we have to block that. This reasoning seems largely unintelligible from the outside looking in, especially in parts of the world where these topics are less emotionally/historically charged. It comes from the kind of ultra-transitive worldview in which one person says something, or allows some words/images to appear somewhere, and someone else who may not even have been exposed to those words/images does something bad, and it's therefore the fault of the first person.

Google's business model was once summed up by the mission of making the world's information "universally accessible and useful". I'm not sure they still believe in that, or at least not enough of them do. If Larry and Sergey want to get their AI products out there they're not only going to have to tackle thorny business model issues, but also try to reset the culture to the old one. The one that looked towards an optimistic future in which advanced technology could empower everyone, regardless of who they are.


It's hard to look toward an optimistic future when, at least from some perspectives, the optimistic future that was predicted decades ago didn't arrive, and in some ways the proliferation of computing has made things worse. The techno-utopianism expressed in Steven Levy's _Hackers_, for instance, seems quaintly naive now.

It's easier to be optimistic in some areas, though. In my own field of accessibility, the rise of personal computers and digital communication has obviously done a lot of good, and I only wish it would go faster (e.g. no more paper).


>"The PR threat seems overblown. OpenAI isn't suffering any obvious reputational problems from making models available to play with, because people understand that AIs and their creators aren't the same thing."

Lol, no they don't. A week ago there was an article here about how ChatGPT is woke.


The reputational hit or improvement (depending on your perspective) doesn't come from making an LLM available, but from the way they've been visibly altering its politics over time. They are altering it exactly because LLMs are not the same thing as their creators, so it started out by giving answers they didn't like. But I didn't see many people care about that (outside of maybe some US journalists, but they're not representative of the wider population). Obviously OpenAI care, but again, that doesn't seem to be driven by actual end user dissatisfaction. Most people just seem to think it's kind of amazing and if it says things that are daft or untrue or stereotyped, well, it's a machine so what do you expect?


>The reputational hit or improvement (depending on your perspective) doesn't come from making an LLM available, but from the way they've been visibly altering its politics over time.

According to you, someone who clearly has an axe to grind, given your multiple ridiculous characterizations of the issue. I'm positive that OpenAI is better off making sure that ChatGPT can't drop n-bombs, rather than the alternative you are suggesting.

> Obviously OpenAI care, but again, that doesn't seem to be driven by actual end user dissatisfaction.

I didn't say "end user dissatisfaction"; I said political blowback. They aren't the same. In fact, there's a good chance the blowback would come from the public at large just by seeing what ChatGPT said, rather than from making ChatGPT say it on their own. Frankly, I'm not sure why this needs to be explained to you, as it should be painfully obvious to anyone who ever leaves the house and interacts with other people in real life.

>Most people just seem to think it's kind of amazing and if it says things that are daft or untrue or stereotyped, well, it's a machine so what do you expect?

Most people? Most people most certainly do not think and behave that way. Maybe you mean most HN'ers? Even then, that's incredibly generous. You seem like you are trying to backfill reality into your political grievances. Yuck!


ChatGPT's politics are those of a center-left liberal who is terrified of saying anything that would offend far-left progressives. Which I'd bet is a pretty accurate description of its creators.


It's much more easily explained as a business decision to minimize blowback but I wouldn't want to get in the way of your feelings regarding anyone that may or may not have blue hair.


Have you used LaMDA? I'm not sure if you've used the magical internal version, but the version that's available to the general public is terrible. I was so excited to get access to the AI Test Kitchen app, but even today it strictly limits you to three narrow and useless scenarios, only lets you put in about 40 characters of input at first, generates the subsequent inputs for you (you can only select from a few available options, choose-your-own-adventure style), and after 4 or so interactions the history is wiped and it starts over.

I'm sure Google COULD have something better than ChatGPT, but what they have today is so bad it's almost an insult to users like me who excitedly signed up and waited months on the waiting list to get access to it.


I wonder when we will get to the point where network effects start to dominate in this space, rather than tech/research. It does seem like (a) ChatGPT being publicly available, (b) it learning from user interactions, and (c) it having generated a significant level of hype could put it in a position where it pulls away from the competition, because there is a flow of user information coming in that might allow it to learn faster and better than competitors. Maybe this can't be compensated for just by scraping more data from the web.

I'm not familiar enough with how they've implemented part (b) to be able to judge how effectively they're doing this, though.
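For what it's worth, OpenAI has published the broad recipe behind (b) in their InstructGPT work: log prompts, sample multiple responses, have raters pick the better one, and fit a reward model on those comparisons. Whether ChatGPT's live feedback loop works the same way isn't public. A toy sketch of just the data-collection side (the function name and record schema are invented):

    # Toy sketch of preference logging for RLHF-style training.
    # Names and schema are invented for illustration.
    import json, time

    def log_comparison(prompt, response_a, response_b, preferred,
                       path="feedback.jsonl"):
        """Append one human judgment of which sampled response was better.
        Pairs like this are later used to fit a reward model."""
        record = {
            "ts": time.time(),
            "prompt": prompt,
            "chosen": response_a if preferred == "a" else response_b,
            "rejected": response_b if preferred == "a" else response_a,
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_comparison("Explain TCP in one sentence.",
                   "TCP is a reliable, ordered, connection-oriented byte stream.",
                   "TCP is a protocol.",
                   preferred="a")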


> I never really understood the argument that OpenAI has some technology that Google does not have. That's just not true; if anything, the opposite is true. And Google's LaMDA is even older than ChatGPT.

Google has to be a long way off monetizing what they have, though. If Google were in a position to roll out AI soon, I can't imagine why they'd be getting rid of 12,000 people who they know can pass their hiring criteria. They would put some of those engineers to work integrating the AI code into Google's products. Unless Google's AI is so good it can integrate itself, I suppose.

The fact that they're letting 12,000 people go shows they don't have profitable work for those people. That alone should tell us something about the state of Google's AI strategy right now.


Do we actually know much about which people were let go, and from what teams? Maybe the percentage of AI people at Google has actually grown in relative terms.

And Google is in fact using AI almost everywhere in production already. There is some sort of AI in almost every product. Language models are everywhere too, e.g. in typing prediction on your phone, in Gmail, in speech recognition, and in many other places. I think they just don't use the biggest models for those things but more efficient ones, which can partly even run offline on your device (e.g. for typing prediction).

Google search also uses lots of AI, including neural networks.

It's just that for LaMDA specifically, they don't have a good product yet.


I find this plausible:

..........

> The formula looks at the variables below, and then spits out a "number" for every Googler. Each PA VP gets a % to cut, and as such there is a threshold. Anyone below that threshold gets RIF'd.

Variables are:

1) Location of labor. US Premium Plus was largely impacted versus cheaper areas.

2) Tenure and performance in level.

3) "Runway" of comp (e.g. base salary vs MRP: 0.8-of-MRP Googlers have a long runway, vs 1.x-of-MRP Googlers who are basically top of band and 'tenured', with no runway except promo).

4) Promo velocity.

..........

Taken from https://www.teamblind.com/post/THE-DEFINITIVE-GOOGLE-LAYOFF-...

Disclaimer: Googler, but no particular internal-only information backing my impression of the above


Blind post is no longer there.

Anyway, I didn't understand the acronyms, so I fed it to GPT, and it definitely made it easier to understand:

Google is using a formula to determine which employees will be laid off (known as RIF: Reduction in Force)

The formula takes into account various factors such as location of labor (with US Premium Plus areas being more heavily impacted), tenure and performance in the current level, "runway" of compensation (the difference between base salary and maximum potential salary), and promo velocity (how quickly the employee has been promoted within the company)

This formula calculates a "number" for each employee based on these factors

Each Product Area Vice President (PA VP) is given a percentage of employees they must lay off

Employees with a score below a certain threshold, determined by the formula, will be laid off
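If the rumor is accurate, the "% to cut" per PA VP is just a percentile cutoff over the formula's scores. A purely speculative sketch of those mechanics (nothing here is confirmed, and the numbers are made up):

    # Purely speculative illustration of the rumored RIF mechanics:
    # each VP's "% to cut" becomes a percentile cutoff over formula scores.
    def rif_threshold(scores, pct_to_cut):
        """Return the score below which employees would be RIF'd."""
        ranked = sorted(scores)
        cutoff_index = int(len(ranked) * pct_to_cut)
        return ranked[cutoff_index]

    scores = [0.91, 0.45, 0.78, 0.33, 0.86, 0.67]  # made-up formula outputs
    print(rif_threshold(scores, 1 / 3))  # -> 0.67; the two lowest scorers fall below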


The post is still there.


Is OpenAI remotely close to being profitable?

The MS deal suggests not, so I don't think it's fair to compare it to Google's hesitancy.

I think we're conflating a proof-of-concept tool and marketing strategy with a viable commercial product. Time will tell whether the pro subscriptions are a viable monetization strategy.


> Is OpenAI remotely close to being profitable?

No, but HN is for talking about VC-backed startups. Profit is kind of optional. :)


> That alone should tell us something about the position of Google's AI strategy right now.

Not really? Maybe they believe those 12,000 people have the wrong skills for this job. Maybe they believe they can integrate the AI with far fewer people. Maybe they would have fired 20k people but decided to keep 8k of them to integrate the generative language model into products.

I'm not saying any of this is true. In fact it's more likely that the company is just reacting randomly, without a big overarching plan. I'm just saying that I don't think you can draw conclusions about their AI strategy from the fact that layoffs are happening.


I don’t think people are fungible units of computation that can just be thrown at any old task and be expected to succeed.

And AI is kind of hard methinks.


Are the 12k people all software engineers?


But it's not just about compute. OpenAI did a lot of work to tame a wildcat. GPT-3 is a monster not fit for public consumption; ChatGPT is a kitten. Along the way, OpenAI discovered that including code in the training corpus improves the general capabilities of LLMs. Google may have models with superior raw capability, but they do not have a safe and helpful product. They are clearly behind the ball, and recent PR from DeepMind confirms they are feeling it.


I think they are winning the race to run these models more cheaply (see the price drop in the Embedding API), and that's the key here.

I've heard that if Google ran LaMDA on every query it would cost 2x their revenue, but if they drop the cost by 10x then it's "only" 20% of their revenue, and that's probably achievable with special hardware and optimizations. So whoever does it first will take over the search space.


Interesting, do you have a link? I'd like to read more.


LaMDA consists of several fine-tunings of a single pre-trained model. Each of them generates candidate responses, the candidates are scored (using yet another fine-tuning of the model, iirc), and the best one is selected.

One expects that the version of LaMDA used internally and by raters does not have to be frugal.
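In other words, a sample-and-rank loop. A minimal sketch of that decoding scheme, with generate and score standing in for the respective fine-tunings (this is the rough shape described in the LaMDA paper, not Google's actual API):

    # Minimal sketch of LaMDA-style generate-then-rank decoding.
    # `generate` and `score` are stand-ins for fine-tunings of one base model.
    def respond(prompt, generate, score, n_candidates=16):
        """Sample several candidate replies, score each with the ranking
        fine-tuning, and return the best one."""
        candidates = [generate(prompt) for _ in range(n_candidates)]
        return max(candidates, key=lambda reply: score(prompt, reply))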


> price drop in the Embedding API

Any link to what you are talking about here?



I am pretty sure OpenAI primarily has really good marketing (or maybe the stronger fanboys). Of course they also have a good team. But the introduction of ChatGPT just went incredibly well, despite the bad things that can still be said about the technology.


They have a publicly accessible product with a userbase; that's their marketing. Google keeps saying they have something better, but nobody outside has ever seen it.

> revealed that it plans to launch its chatbot in private beta sometime in 2023

Last year, kids in school were already writing essays with ChatGPT.


This is the thing that people seem to miss. It's very accessible to the average Internet user. Kind of like TikTok and Instagram: most teens have tried it out. Sure, the other FAANGs might have far more capable AI teams, but most of that output runs in the background. Few of their models are available to the general public, let alone ones that went viral on social media.


Yeah, and besides going viral, ChatGPT is known to be good in real-world usage. Hardly anybody outside of Google knows how LaMDA compares.


Both have the technology, but Google has to protect their ad revenue, while OpenAI has to impress everyone and get funding. That's why OpenAI released GPT-3 and ChatGPT while Google is just starting to get serious. Google has a lot to lose from the coming paradigm shift in user-computer interaction. Eventually even regular people will run their own local (smaller) AI models, and Google will be reduced to a simple API call from the model.


When you say it's hard to beat Google and their TPU clusters, do you mean Azure has inferior capability? Isn't OpenAI limited by what they can do on Azure?


I don't have numbers, so I don't really know, but yes, I would assume Google (incl DeepMind) has more computing power than Microsoft + OpenAI.


I'm also interested in Azure's FPGA instance capacity, which is the comparable counterpart to Google's TPUs AFAIK.

From what I know, Google has a large number of TPU units (not sure about the exact numbers, but I would assume it exceeds 5 digits?) and also has the luxury of sharing TPU instances with products that operate some of the largest models on the planet, namely search and ads. So Google has some advantage in flexibility of resource allocation if it really wants to prioritize AI stuff.


Why? Both are huge supercomputer users, and both have similar setups. Both own a search engine; both run ads. I don't really see any reason to suggest Microsoft is less capable than Google. Unless I am missing something?


My assumption is based on this:

For a long time, Microsoft's core business was Windows and (offline) Office. You don't need a super cluster for that. Google, on the other hand, always had search as a service, available to the public, which needs more compute. So historically, Google needed more compute than Microsoft.

Then, from experience, as I do research on neural networks and read papers in the field, Google/DeepMind have always been more active in this field than Microsoft, so again, historically, they needed more computing power because of this. Microsoft has caught up, but still, I think Google/DeepMind is more active in the field.

Then, Google has TPUs. I think this gives them computing power much more cost-effectively than having to buy Nvidia GPUs. This is maybe the biggest argument.

But in any case, as you say, both companies are big enough to easily train something like ChatGPT, or something like GPT-4, however big it might be. Their computing power is almost infinite (by today's standards), so it doesn't really matter who has more.


Google has been at the forefront of AI for years, decades maybe. Does Azure even have TPUs? It's reasonable to assume Google has close to the best ML compute resources of probably anyone in the world.


Where do you think GPT-3 was trained and currently runs? On Azure. It doesn't need TPUs.


GPT-3 is an impressive model, though I think it took 512 GPUs 10 days to train. That alone doesn't mean that Microsoft has more computing power than Google.


It does mean that you don't need the most computing power to train a successful model.


Historically Google far outspent Microsoft on their "supercomputers", though in recent years MS has been closing the gap.

https://www.platformonomics.com/2022/02/follow-the-capex-clo...


Google supposedly hires more of the ML experts and custom hardware experts. Whenever someone says that GCP is inferior to the competition, it's usually followed by "except in ML."


ChatGPT improves over LaMDA in pretty-printing code and in passing certain Turing-test trick questions. It's an improvement not unlike how a delivery bot that swaps recipients' packages is an improvement over one that invents non-existent recipients.

What makes ChatGPT interesting is that it could perhaps end up a freemium service, whereas Google might remain closer to ad-supported business model(s).


I agree, but it is interesting that Andrej Karpathy holds the view that Google is behind in these areas.


Karpathy is the reason why Tesla is now behind in a lot of their own AI areas.

You're supposed to tell the idiot Elon that actually, sensor fusion does work, and maybe we shouldn't remove radar and parking sensors.

But he didn't listen, and now the world is waking up to how shit Teslas are. That's certainly part of why they're being heavily discounted right now. Just in time for the market to see the ROI on Elon's new bird-shaped husk of a company.

Google is not behind in these areas in any technical sense; Karpathy just doesn't know what he's talking about.


I think your view is far too negative. Musk is unlikely to be an idiot, and Karpathy is unlikely to be uninformed in the AI space.

However, I do wish that Lex had dug deeper on this particular point. DeepMind are certainly no slouch.


Ah yes, a rando on YC knows what he's talking about, but Karpathy, who led the project, doesn't.


Maybe the rando knows a lot; how do we know? The only thing I'm reacting to is that the rando's comment is overly negative.


Source? (Not to question your assertion; I'm genuinely curious to hear about Karpathy's POV.)


He states this in his most recent interview with Lex Fridman. He doesn't go into much detail, however. I recall him specifically stating that "maybe Bing will get another chance".

My comment would be higher quality with a link to the exact time in the video, but I don't have time to do that atm. I suggest watching the whole video, however; it is very good.


> Also, the technology itself is not really that fancy. What matters is that you have enough computing power.

And data


I was going to say: the computing power isn't that meaningful compared to the data size and quality. Which Google probably has the lead on too.


They probably do have the best data. But I am not holding my breath for them, while I can hardly wait to see the GPT-4 release. I've been trained to expect little or nothing new from Google in the last 5 years, in practical terms.

I need OCR for document understanding: Amazon Textract beats Google. When I need to translate, DeepL is better. When I need text-to-speech, NaturalReaders is much better. Google's voice understanding is OK, but Whisper is just as good or better, and free. YouTube recommendations are not great and not very flexible; I have had better suggestions from ChatGPT, which is actually quite surprising. GCP ranks below Azure and AWS. And finally, search quality is bad: very often what you search for is answered with unrelated and useless results.

Tell me, where does Google's excellence shine? Computational photography? They have so many researchers and developers working on so many things, and yet their main products are mediocre.


I'm similarly pessimistic about Google. They can keep old successes running at least.

The one thing that bothers me extra, for some reason, is Waze. They boast so much about their self-driving technology and take shots at Tesla, but you know, where do I buy a Waze? Oh, it's not a product I can buy or even use; it's just a beta test run in two cities that's far from being a real product, let alone profitable. The odds of Google (or Alphabet, whatever) axing the project are pretty high, yet they're still arrogant.

Edit: Hahaha, I meant Waymo, not Waze.


> (Site is down.)

Up for me, but takes several seconds to load.


Sounds like IBM.


I'm sort of confused -- so Google has had this amazing, breakthrough chat tech but just didn't feel like making it available?

I asked ChatGPT and it had this to say:

> is google's LaMDA available to the public? how does it compare to chatGPT in quality?

> Google's LaMDA (Language Model for Dialogue Applications) is not currently available to the public. However, it has been used in a number of Google's products, such as Google Assistant and Google Meet's "smart compose" feature. It is not clear how it compares in quality to ChatGPT, as the models have different training data and architectures, and are used for different purposes. However, LaMDA is specifically designed for dialogue generation, while GPT-3 is a more general-purpose language model.

Assuming ChatGPT isn't lying to me, if that's the best Google could do with the tech, then at best they suffer from an absolutely devastating lack of ambition and creativity in their application of it. (Or perhaps they're just moving very slowly.)

Anyway, hopefully all this does motivate Google to do some impressive things.


Or perhaps it's the innovator's dilemma. Why would they voluntarily cannibalise their ad business by offering a chat interface that means users don't have to visit websites?


I don't think it's any specific scenario like that they're afraid of.

It's more that the changes this technology will lead to are extremely hard to predict, and Alphabet (and Alphabet's many powerful friends) have far more to lose than to gain.


In the other article here, the DeepMind guy mentioned they might release the bot once it's able to cite its sources, which makes total sense.


As I understand it, a couple of the lead Google scientists from LaMDA went on to found character.ai, which, for me, checks all the boxes that an LLM should. I haven't really seen many people compare it with ChatGPT, and I've never used the latter, but from using the former quite a bit I think it's very comparable to what people have done with ChatGPT.


I tried talking to the "Ask me anything" bot suggested on the main page, comparing it to ChatGPT. It's pretty good, but definitely not as good as ChatGPT. ChatGPT is usually so flawless it passes some sort of phase-transition threshold where it just feels completely natural.

(I'm not sure if a different bot would've been better. It seems like the sites have slightly different purposes.)


You may be right about that, but I don't think any of the bots on the main page are particularly good examples; to really get an idea of its capabilities I'd recommend creating a bot, playing around a bit with the example-dialogue section you can feed into it, and then having a few rounds of conversation where you aggressively probe topics and vote on several possible replies before accepting one.

I still need to check out ChatGPT I guess, if something new hasn't blown them all away by the time I get to it.


Not sure if you got downvoted for naively believing ChatGPT, but it warns you that it has only been trained on data up to 2021, so the answer would be outdated even if it was correct.


They say it's not ready for prime time, probably because it's way too easy to make it say wrong or offensive things that would damage their reputation, and because they haven't figured out how to monetize it. I also think they got comfy with their money printer and were slow to respond to the competition, just like YouTube was with TikTok.



