Nvidia hits $2T valuation as AI frenzy grips Wall Street (reuters.com)
139 points by cannibalXxx on Feb 23, 2024 | 277 comments


Every non-tech company in the world is trying to figure out an "AI strategy," but most executives have no clue as to what they're doing or ought to be doing. They're like patsies with money joining a game of poker they don't understand. They are on track to learn predictable lessons that cannot be learned in any other way.

Every provider of "AI as a Service" is spending money like a drunken sailor who just came back after spending six months at sea. None of these providers are charging enough to cover the cost of development, training, or even inference. There's a lot of FOMO: no tech company wants to be left out of the party.

Nvidia, as the dominant software-plus-hardware platform (CUDA is a huge deal), is the main beneficiary of all this FOMO. A lot of unprofitable spending in AI ends up becoming... a juicy profit for Nvidia. We're witnessing large-scale transfer of wealth, from the FOMO crowd and clueless end-users to mainly one company.

The $2T question is: Is all this FOMO sustainable in the long run?


Cisco seems the most reasonable analogy to me. They were the ones selling shovels during the Internet boom. The Internet market is many orders of magnitude larger today than it was during CSCO's peak. CSCO is still highly profitable, but its stock price remains well off its peak.


>Cisco seems the most reasonable analogy to me.

Cisco at its peak in 1999 had a P/E ratio of near 200. Nvidia's trailing 12 months is closer to 50 (today it is closer to 60). So I don't think Cisco is a good comparison.


AI gold rush is only just starting, though.


Their competitors are only just starting too.


Cisco had a 550bn market cap at the height of the bubble. That's a lot but NVDA is about to overtake Saudi Aramco in market cap.

How much revenue can nvidia realistically have 5 years from now, and at what margins? 100bn in revenue (NVIDIA grossed 25bn last year, so 4x top line growth) at 35% gross margins = 35bn. Slap a 25x multiple on that and you have roughly a 900bn market cap. Discounted at 8% to today (a factor of about 0.68), that's roughly 600bn. Not 2000bn. Are my numbers too pessimistic? Probably.

But still, it's hard to come up with 2029 numbers that justify a 2000bn market cap today. Even with a 5% discount rate, NVDA has to be worth roughly 2,550bn 5 years from now just for current investors to break even against that discount rate.
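
For the lazy, a minimal sketch of that arithmetic in Python (every input is just one of the assumptions above, not a forecast):

  # Back-of-envelope version of the comment's math; all inputs are assumptions.
  revenue_2029 = 100e9      # assumed revenue five years out
  margin = 0.35             # assumed margin
  multiple = 25             # assumed earnings multiple in 2029
  discount_rate = 0.08      # required annual return
  years = 5

  future_value = revenue_2029 * margin * multiple               # 875bn
  present_value = future_value / (1 + discount_rate) ** years
  print(f"implied value today: ${present_value / 1e9:.0f}bn")   # ~595bn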


> How much revenue can nvidia realistically have 5 years from now, and at what margins? 100bn in revenue (NVIDIA grossed 25bn last year, so 4x top line growth) at 35% gross margins = 35bn.

I think you are off by a year because NVDA’s fiscal year is 1 year ahead. NVDA’s fiscal 2024 just ended (ended Dec 2023) and was announced this week.

Their latest quarter (ended Dec 2023, fiscal Q4 2024) had 22.1bn in revenue. That’s more revenue than Cisco’s full year in 2000.


Oops I was off by one month.

Nvidia’s fiscal 2024 just ended (ended Jan 2024) and was reported this week.

They are currently in Q1 2025.


> (NVIDIA grossed 25bn last year, so 4x top line growth

Q4 revenue was $22B, and guiding $24B for Q1, so likely $110Bish revenue in 2024.

Their net income should be in the same ballpark as Microsoft / Apple / Amazon / Google in 2024.

Meanwhile, Apple / Amazon / Google are growing ~10% YoY; at Nvidia's current growth rate they could surpass Mag7 net income in 2025/2026.


Apple's net income was 97bn last year, Amazon's net income was 10bn. I'm not sure why you would even put those in the same category. Amazon has historically been valued on their free cash flow.

For Nvidia to have 100bn in net income they would need at least 200bn in revenue. Apple has 100bn in net income on 400bn in revenue, for comparison.

Ultimately the question is whether nvidia's moat (chips + cuda) will be durable. My guess is competitors will arrive soon and that nvidia's margins will go down. Right now nvidia seems untouchable, but reality will set in sometime next year.


What I believe are safe predictions (no dates, to make them extra safe):

1. total market size will go up
2. Nvidia's market share will go down
3. Nvidia's profit margin will go down

The trillion dollar question then is whether the uplift from #1 is bigger than the downdraft from #2 and #3.


I think you could potentially take the path Cisco took during the dot-com boom and overlay it on Nvidia's trajectory during the AI boom.


The problem with this kind of analysis is you have to bake in a bunch of assumptions. For example, the Reddit link in the sister comment assumes that the x-axis scale stays the same and arbitrarily chooses a starting date and y-axis scaling to make the lines fit. This pre-supposes all kinds of things about where we are in this boom-bust cycle that may not be true.



I think P/S ratios are fairly comparable, though. Data going back that far is behind a paywall, so can't check.


Cisco’s peak was $79 at a $546bn valuation in March 2000. Their year ended July 2000 had $18.9bn revenue

https://www.cisco.com/c/dam/en_us/about/ac49/ac20/downloads/...

That’s a P/S of 29 (but one quarter in there is forward looking)

NVDA's year ended Dec 2023 saw $61bn revenue. All 4 quarters in there are backward-looking. If you shave off the oldest one (7.2bn) and add on this current quarter (22bn forecasted), you get 75.8bn.

Or a P/S of 26.4.

It’s closeish
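
The same comparison as a quick script, using the figures quoted above (NVDA's market cap is taken as roughly $2tn, per the headline):

  # P/S check using the numbers cited in this thread.
  cisco_cap, cisco_rev = 546e9, 18.9e9   # Mar 2000 market cap; FY ended Jul 2000 revenue
  nvda_cap = 2.0e12                      # roughly, per the headline
  nvda_rev = 61e9 - 7.2e9 + 22e9         # trailing year, oldest quarter swapped for the guided one

  print(f"Cisco  P/S: {cisco_cap / cisco_rev:.1f}")   # ~28.9
  print(f"Nvidia P/S: {nvda_cap / nvda_rev:.1f}")     # ~26.4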


Nvidia's P/E is still roughly a third of Cisco's peak.


So hold until Nvidia hits 200 P/E or play chicken with the bust of the AI bubble.


If Cisco then why not Nortel?

The Company that Broke Canada - BobbyBroccoli https://www.youtube.com/watch?v=I6xwMIUPHss


Exactly. Every startup these days is shoveling "AI" into their summary or description, trying to attract some VC money. Reminds me of the crypto/blockchain bubble a few years ago. Guess what: Nvidia again was the main beneficiary :)


My favorite moment was when the Long Island Iced Tea company (a subsidiary of Long Island Brand Beverages, LLC) renamed themselves Long Blockchain Corp in 2017, causing their share price to quadruple. They were later delisted from Nasdaq, and the SEC charged several individuals with insider trading. [1]

[1] https://en.wikipedia.org/wiki/Long_Blockchain_Corp


The entire tech sector is addicted to these hype bubbles. Previously they were for actually new things, things like the Internet, portable computers, the first smart phones, the first touch-screen smart phones, etc. Now we just get smaller and smaller iterations on previous products, and the marketing for these companies has no language for incremental, basic improvements so instead we're sold a phone that's 15% faster like it's the second coming of Christ, and for some reason, people keep falling for it.

I've already predicted that the AI hype train will crash into the wall of reality in more or less the same way the crypto one did, and then the grifters will move on to the next thing they can over-hype.


> but executives have no clue as to what they're doing or ought to be doing.

But their marketing reports know EXACTLY what to do. Just add a blurb in any press release about "AI enabled x, y, z for our a, b, and c product lines" and the board and other execs are happy.

Meanwhile, 9 times out of 10, any feature that gets rolled out will be a collection of shell scripts that simply analyzes some data and spits out an answer, any answer, and then "we're AI enabled."


It's the classic shovels in a gold rush model. I just wonder what happened to the shovel makers after the gold rush petered out...


Much of the SF-area old money comes from folks selling tools into the gold rush: for example, Folgers started out serving coffee to miners, Levi's sold jeans, and Wells Fargo handled the banking.


The seller did great. They have a university named after them just a few miles from Nvidia HQ.


If you don't know who (like I didn't) it was Leland Stanford.


The university is named for the seller's late son, not the seller himself.


Since the son (Leland Stanford Jr) was named after the seller, I guess you could still say it was named after him...


The same thing that Nvidia did after graphics cards and then crypto mining became less profitable - diversified.


Nvidia has really managed to capitalize very well on multiple "Compute that does nothing important" trends. From 3D games to crypto mining, now to AI. Jensen Huang seems to have a crystal ball that can predict what people will waste compute on in the next 5 years.


Nvidia had the foresight to realize GPUs could be used for more than just graphics and put a lot of effort into CUDA.

Beyond that, there was no crystal ball, rather a useful physical product and software tools to make it accessible beyond a highly specialized set of users (I am not saying CUDA or anything is that accessible, but it makes GPUs useful to more than just game devs).


What exactly is 'something important' if those things don't meet your metric?


Is he really predicting, or are the areas where GPUs are becoming useful increasing in number?


I think you really have cause-and-effect backwards.

First off, 3D games existed before NVIDIA. NVIDIA wasn't even first to market for making dedicated 3D acceleration hardware.

CUDA existed before Bitcoin and was used for physics and chemistry simulations when it first came out.

> "Compute that does nothing important"

Entertainment is important, so to just toss 3D games aside is asinine. I'll agree that crypto-mining is completely worthless. As far as AI goes, certainly there's a lot of hype around it right now, and we're probably gonna see a lot of AI startups go belly-up in the next couple years, but it's definitely not a fad and isn't going away.

Being highly dismissive doesn't make you cool or edgy, just so you know.


I mean, I'm a 3D gamer myself and I'm happy we have GPUs. Just being realistic about their importance. I am fully aware I'm burning kilowatts of energy so that the leaves in Red Dead Redemption 2 look beautiful.


Have you ever worn Levis jeans? He did okay.


I think you're being a bit patronizing here. It's lazy thinking to just assume everyone else is stupid except you ;)

Transformers and diffusion models have already completely changed the game and the architectures are still very primitive. They will only get better from here.

The crazy part is that for the foreseeable future, the limiting factor for improvement is compute. Training the models for more epochs is all that is needed, for now. Moore's law seems to be coming back for another round.


They didn't say anything like that.

Anyone who's worked in tech for a few decades knows that what the OP describes happens regularly.

There's nothing patronizing about sensing the pattern forming again here, and recognizing that the only sure thing is that the doe-eyed forty-niners racing to California all need pick axes and camp supplies and that the axemaker is going to get rich on their dreams no matter how things turn out.

It doesn't suggest that there's no gold out there or that the game theory calculus that sends everybody chasing that gold is unsound. No individual is being called stupid.

It just emphasises that a lot of people are going to make the wrong bets on AI, using more money than they really could afford to lose, and that NVidia is the only guaranteed (absurd) winner so far. There's nothing patronizing or even contentious about that.


My post was not meant to be patronizing. Sorry if it came across as such!

I have no doubt that society will benefit from all that spending...

but I'm much less certain about the fate of all the companies incurring all that spending!


Even if 'regular' companies stopped demanding GPUs from NVIDIA, that demand would just transfer to cloud providers, which would keep buying those sweet H100s. It's not a misguided AI strategy that is pumping the price. It's just a regular gold rush.


I never said most companies are buying GPUs from Nvidia.

What I said is that they're spending money on a variety of AI and AI-related services provided by third parties.

Those third parties, in turn, are buying GPUs.


> They will only get better from here.

What's not clear is if they'll get 20% better or 500% better.


Those non-tech executives often need to signal that they do something with AI in their firms - visibly ignoring it is generally not an option even if they believe that they could. They are not necessarily the patsies you make them out to be.


True. In many cases they're acting like patsies... because they believe they have no other choice!


My manager keeps asking, “can we add AI to this?” Not once has it made sense. I don’t think he has any idea what AI is, he just knows the CIO is GenAI crazy.


I saw an ad by some AI as a service (AaaS) company I'd never heard of on YouTube the other day. In the ad people were racing around trying to figure out an AI strategy - the solution? Just let us do it for you.

There's so much money being thrown at anything with "AI" in it at this point - definitely FOMO fever. While I think Nvidia will do just fine (as the primary maker of picks in a gold rush) it seems like they're going to need to go after yet other markets - the trajectory so far has been gaming -> crypto -> AI. What's coming after that?


LoL. I never thought someone would pay to create and show an ad acknowledging that almost no one in the corporate world has a clue about any of it! If you remember the name of the company that paid for that ad, please share it!


>The $2T question is: Is all this FOMO sustainable in the long run?

How did it go with bitcoin and cryptocurrency in general? That seems the apt comparison.


Not really, I think it's more apt to compare it to the dot-com bubble, given AI's obvious value to the average consumer.


Whether an AI winter is coming or not, Nvidia is going to be selling a shitload of shovels for the next 10 years.


Enough to justify a $2tn valuation?


If AI were just one thing, maybe not. But right now "AI" is LLMs, ML, the umbrella of "generative" (image, code, voice, video), and probably others I can't think of off the top of my head. I sure wouldn't bet on all of those fizzling out, and there are a lot of shovels to sell for each one of them.


They're likely going well beyond $2 trillion soon. The hype isn't nearly done, in my opinion.


AI, like the Internet, will produce a good amount of economic value, so it's not something that will be a total waste. Sure, there will be a lot of waste, but quite a bit of productivity will come from it.


It's just another boom and bust cycle, like the cycle that ended in the dot bomb. AI will be hugely profitable for the companies who figure out how to monetize it the best, but for the rest, not so much.


So this is Yahoo with ad spending but with the added extra of hardware development that increases costs and reduces manoeuvrability?


I have a little pet theory that due to the amount of compute we have today, wastes of resources like AI or blockchain or whatever comes next are a natural result of this overcapacity beyond most use cases. And businesses will always buy into this spare capacity burn because it makes them appear innovative or productive. As a result, the hardware manufacturers like NVDA will make out handsomely, swinging from vine to vine selling shovels and blue jeans to each new hype technology.


Nvidia & cloud providers are the shovel sellers, seems to be working out quite well


Yeah, being an AI provider is going to be a cost of doing business. Does your spreadsheet have an AI assistant built in? No? Then I'm switching to one that does.

It is not going to be a differentiator.

Every worker will need to be equipped with an AI assistant or the company is pedalling a bike on the freeway.

AI is a tool, like a laptop. Does having a laptop make you special? Nope. But you MUST have it.


I bought AAPL back in 2005. Since then I've heard again and again about how the stock will come plummeting down because of what a dumb layman investor like myself would just call "Accounting Stuff" - such as P/E, etc.

The same would go for many other companies, especially in the tech sector.

Almost 20 years later, almost none of it has come true.

Maybe it is irrational, but at this point, it feels like the irrationality is likely to outlive anyone here.


AAPL's P/E has almost always been extremely low in that period (and still is iirc, they have a ton of cash on hand which you need to back out).

After 2008, AAPL was trading for something like 7x earnings, GOOG same, MSFT same.

The funniest part is that people genuinely don't understand that these examples disprove the point they are making.

(And yes, NVDA is obviously overvalued...)


I didn't understand P/E in 2005, and to be honest, I still don't fully.

All I know is that it's been one of the terms being thrown around during the past 20 years as a reason why Apple's stock price will plummet to the ground and why I'm stupid for investing a large portion of my net worth in it.

At this point, my upside is immense so unless something absolutely catastrophic happens, odds are it seems I'll be having the last laugh.


If a share of a stock is $100 and the profit of the company is $5 for that year, then the price to earnings ratio (P/E) is 20. Suppose the share price and the profit remain equal, then it takes 20 years for the earnings to "cover" the price of the share. That's the spirit of the P/E ratio, in my opinion.

Others would simply tell you that it is:

  (share_price / earnings_of_that_year) == price_to_earnings_ratio
Sometimes analysts will say that a P/E ratio is too low or too high based on what industry competitors trade at. So, perhaps there was a time where you read that an analyst and/or news outlet believed that the P/E of Apple was too high relative to its analyzed peers (e.g. Microsoft - assuming Microsoft was seen as a peer back then).

Whether one should or shouldn't trade based on a particular P/E ratio has always been a matter of opinion.


> If a share of a stock is $100 and the profit of the company is $5 for that year, then the price to earnings ratio (P/E) is 20.

It's based on market cap, not share price. Market cap is a function of share price (It's just share price * number of outstanding shares), of course, but to just say it's share price leads to misunderstandings.

Otherwise, that would imply that a stock split creates a multiplier of the P/E, ie, a company with a P/E of 20 does a 1:5 split ends up with a P/E of 100 post-split, and that's certainly not what happens.


Instead of silently downvoting, how about telling me why I'm wrong.

P/E is usually defined as `share price / earnings per share`. Market cap is `share price * number of shares`. Via simple substitution, this means P/E is equal to `market cap / earnings`.

But calling P/E `share price / earnings` is simply incorrect.
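
A tiny sketch with made-up numbers, showing that the per-share and market-cap forms are the same ratio and that a split changes neither:

  # Illustrative numbers only; no real company intended.
  shares, price, net_income = 1_000_000, 100.0, 5_000_000.0

  eps = net_income / shares                       # earnings per share
  print(price / eps)                              # 20.0  (share price / EPS)
  print(price * shares / net_income)              # 20.0  (market cap / net income)

  # 1:5 split: five times the shares at a fifth of the price; EPS also falls 5x.
  shares, price = shares * 5, price / 5
  print(price / (net_income / shares))            # still 20.0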


In finance circles, the word you are using for "earnings" is usually called "net income" (or "net income for common stockholders"). When people say "earnings" they usually are referring to "earnings per share".

In short: net income is total, "earnings" is "per share"


Right now, Apple's P/E is on par with SP500 average. It's not an overpriced stock (or, contrarily, the entire SP500 is overpriced).


A genie gives you $1 per year in perpetuity. You are engaged in a bidding war to buy the genie and need to figure out your best price. The price you (the market) are willing to pay is the PE ratio.


I mean, you know this, but this is only true for the most basic, dividend issuing non-growth company. Most of the genies don't give you anything per year. Their only value is the amount you can re-sell the genie for.

EDIT: I guess I don't understand the "gives you $1" part if we're not talking about dividends.


E is the company's earnings, not what it earns for the shareholder (or rather pays out to the shareholder in that period). You (and maybe GP) seem to be confusing it with dividend yield?


As an owner of a company, your valuation includes both retained profit and dividends. Say you have 100% of a company with $100 in the bank; it makes a profit of $20, pays $5 as a dividend, and keeps $15, so it now has $115 in the account. The owner got $5 in cash, the book value of the company is now $115, and future valuations will reflect that.


Is it not clear from my parenthetical there that I understand that?

> it earns for the shareholder (or rather pays out to the shareholder in that period)

The comment I replied to was calling dividends, and only dividends (within the given year we're calculating P/E for, no less), the company's earnings. That just isn't correct, whatever your views on valuation, shareholder ownership, and market efficiency.


In an ideal world, yes.

But in the real world, company valuation is an entirely subjective matter that prices in expected future growth or losses.


Dividends don't matter. The companies just choose to buy stock from you with the profits instead of giving you cash. It's a smarter way because shareholders who want the cash can get it and those who don't can keep reinvesting profits while avoiding triggering tax events.


Yeah but why should anyone care about earnings in this exact year? P/E might be useful for your thought experiment about the genie but it's completely useless as a tool to value companies.


P/E is only one number; you also look at growth as a factor to calculate PEG. So the question now is how long NVidia can grow at this crazy rate.
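
For reference, the PEG arithmetic is just one division; the growth number here is an arbitrary assumption, not an estimate:

  pe = 60        # NVDA's trailing P/E, roughly, as quoted elsewhere in the thread
  growth = 80    # assumed annual earnings growth, in percent

  peg = pe / growth
  print(peg)     # 0.75; a PEG near or below 1 is conventionally read as cheap relative to growth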


What? You as the shareholder own the earnings and their use.

Earnings thus either become dividends or become re-invested or are used to buy back shares.

P/E is the central metric by which to judge stocks on a fundamental level.


What I am saying is that P/E is based on current-year earnings. It can be quite low even though the company is terrible (Intel), or quite high even though the company is great but has chosen not to cash out yet (Amazon from a few years ago). You should care about how much the company is likely to make in the coming decades. How much they have made in the current year is a useless metric for that.

Profits are not the only thing either. Some assets give political power or power to shape the world. If NVidia made 0 it would still be very valuable because of that.


There is something called forward P/E, which is based on the company's estimated future earnings rather than trailing earnings.


It's a statistic that's easy to calculate and provides no information whatsoever whether to buy or sell.


Apple's P/E is 28, Procter & Gamble's is 22. It hardly seems overpriced. It also has a high book value because of huge cash reserves and little debt (if any?).

In 2005 it was a gamble, I have no idea; if they hadn't invented the iPhone they would have been DOA.

To some degree, they will need another revolutionary product as iPhones seem pretty mature, and I think they are still a little unsteady on that front judging from Vision reviews and sales.


Survivor bias plays a strong role in these conversations.

There were a ton of promising companies back then that no longer exist today. But you don't hear about them on HN.


> Maybe it is irrational, but at this point, it feels like the irrationality is likely to outlive anyone here.

There is no rational universe where a company can be worth almost $3T, and at the same time, investors know there is a ~% chance every year that Taiwan gets attacked or blockaded, and that company won't be selling anything for half a decade, minimum. It's like betting on the Death Star fully knowing the exhaust port issue exists. Complete with "but surely our fighters (er, US government) will prevent anyone from actually hitting that port."

At this point, I just believe the entire stock market is irrational and in the stratosphere; simply because, where else are you going to invest? Too many investors, not enough companies. But what goes up...

(Edit: I'm primarily referring to Apple here. I am aware NVIDIA primarily uses South Korea - but at the same time, if something were to happen to Taiwan, even NVIDIA will need to prioritize fab capacity.)


You're right. And to continue with this kind of reasoning: there's definitely a non-zero chance that humanity is wiped out by a large asteroid collision, or a fast spreading lethal disease, or nuclear war. Therefore since there's a non-zero chance that everything in the future will be worth exactly zero, everyone should stop investing in the market immediately.


Looking at the stock market, we can conclude that a Taiwan invasion is deemed quite unlikely by investors.

Another million-dollar question: is the likelihood of another magnitude ~8 SF earthquake priced into the Nasdaq?


I would absolutely bet on the Death Star knowing the exhaust port issue exists.

Just because some 1% chance that a new Jedi comes along actually happens doesn’t make it a bad bet.


What if you bought Apple in 1995? or 89? Would you have remained solvent long enough to make it to 2024?

In the market timing is both 1. everything and 2. impossible to predict without cheating.


Don't invest on margin, ever.


I would not invest long term using margin. The interest rates would likely eat any profits unless you got lucky on a bet.

Short term, though? Nah, you should absolutely trade on margin.

I started investing in 2020 immediately after markets hit rock bottom from COVID. Margin multiplied my gains as I bet on the market bouncing back.

You could certainly argue that I was just lucky. I could have easily been wrong and it's possible markets could have taken a longer time to recover than I expected. But tbh, the market recovered significantly FASTER than I expected.


By definition you're risking more than 100% of your portfolio value when trading on margin, even short-term. One wrong bet can wipe you out. You shouldn't delude yourself to believe that this isn't the case because you've had one successful experience with it.

Risking getting wiped out may be a palatable prospect if you're early in life and have the time to rebuild your fortune, but it won't be if you're risking 20 years of savings.


That's not right. Margin is a great tool.


I've been loading up since 2006-2007... every quarter I get more shares from dividends. Also bought META at IPO.

I only had to sell AAPL/META for a downpayment.


are you still accumulating at these prices?


I haven’t in a bit only because buying a house has stretched me thin, but hope to start putting money in again in a few months.


AAPL wasn't riding any hype waves in 2005.


And if you did the same move with blackberry at the same time, you'd have lost most of your investment. What's your point? That survivorship bias is still alive and running in the modern world?


Nvidia of today reminds me a lot of Cisco of thirty years ago. Back then when the Internet was starting to take off, Cisco was one of the few companies actually making any money from the Internet. The frenzy to buy Cisco was intense. Thirty years later, Cisco still exists but absolutely nobody is excited about them.


> Thirty years later, Cisco still exists but absolutely nobody is excited about them.

What is the lesson you take from this, though?

To me, the lesson is that "Predicting the future is hard." No matter how well you're doing, you still need to make some sort of prediction of the future to stay on top.

If you get $2 billion and want to stay a billionaire, you can bury it underground and predict that inflation will be minimal, or you can invest it and predict that investments will be profitable. Either prediction could be wrong. (But they're not equally likely.)

In that same vein, even though Cisco was printing money, the world changed. The world will always change. It's not necessarily a failure that they couldn't predict and invent the next money printer.


For Nvidia to become like Cisco we need 2 things: 1. AI becomes easy at every level with no big revolution, and 2. Nvidia does not diversify and invest in the next big thing. I am not sure of 1, but 2 certainly won't happen. Maybe if the next big thing has nothing to do with computer hardware or AI.


My point is that while Cisco was an early winner, Amazon and Google started growing a few years later and were much larger winners in the end. I think there is a possibility that history may repeat with Nvidia, but I have no idea what new companies are going to rise in the AI era.


There's a world of difference between building networking equipment and GPUs though.

While Cisco was THE brand at the right moment, they didn't have any secret sauce.

Nvidia though (sadly), the industry has been playing catch-up with them forever.


Nvidia does admittedly have a lock on the market at the CUDA level but not necessarily at the Pytorch level.


Because now the competitors have caught up.

And this will happen to NVIDIA as well, but it might be at a slower pace, as NVIDIA's tech is more expensive to develop than Cisco's.


The question is how much of the gains from advances in AI the current leaders will be able to capture. I'm actually guessing more than in the Internet boom that peaked in 2000, but I'm not sure. If the players remain static, we might see more custom silicon designed by them, the way Google does with its TPUs, which is why I'm not putting too many eggs in the NVidia basket alone but also in Broadcom, TSMC, etc.


The market is really testing being irrational longer than I can withstand.


The "economics" part of the stock market is just a fig leaf to let people pretend it's not just another form of gambling. It's OK to enjoy it, lots of people like poker and blackjack. But don't pretend it's something more than that. The market is exactly as rational as a deck of cards.


Spoiler alert: The market has never been rational.


The market is so irrational that the most profitable businesses with the highest profit margins and scalability and highest barriers to entry have the highest market capitalizations.


Spoiler alert: there's a decent argument to be made that the market has never been irrational either. It's just a matter of timing to allow the markets to adjust. A market in and of itself is never perfect.


Rationality is binary.

The very occurrence of speculative bubbles (to be pedantic: where asset prices soar well beyond their intrinsic values and eventually crash) is a strong argument against market efficiency.


Disagree. Nobody is perfectly rational. So if you make rationality binary, then everyone and everything is irrational. That's not a useful definition.

It's more useful to notice that, while everyone and everything is irrational, some are far more irrational than others. That is, it's much more useful to not consider it a binary.


How do you know the "intrinsic value" of share? In most cases that is inaccessible.


Then if markets are always non-efficient, but they "work", I'd argue that they aren't irrational, they are simply quite rational in doing their overall job over time.


I think you are confusing efficiency and effectiveness. Markets, when regulated, can be an effective way to distribute goods and services.


Equity markets tend to be reasonably efficient - we can debate the extent of the efficiency and how exactly we define things like "weakly efficient" etc.


It's not, but individual investors can still be rational.


Can they? If it's possible, I don't think it's the rule.


At least it used to have the decency to play-act at being so.


Hah, when? It's always been a random walk.


The market used to crash, people used to be able to afford houses without going into proto-slavery. Thousands of people used to enter a single store on black friday. We used to be a country.


This is just a hallucinatory phase.


It's been irrational since the Industrial Revolution; future growth has never been priced in even to the level of a reasonable discount rate.


The time value of money is constantly in flux, so even if future growth were priced in / perfect market theory held, prices would still change with P/E swings as investors can get more or less for their money elsewhere.


The market is rational about AI. It's truly going to be transformational. It is irrational about NVIDIA, but hey, when money flows out of NVDA it will go to the next AI company; it's not like the sector will deflate.


Man, all that Nvidia stock I bought back in 2016 because I thought the Oculus Rift was cool is really starting to pay off.


Gj


Google: 1.80T (P/E 26.81)

Apple: 2.84T (P/E 28.60)

Microsoft: 3.07T (P/E 37.47)

Nvidia: 2.02T (P/E 67.63)

Quite astonishing.


Nvidia's valuation is perfectly reasonable if you expect that they'll manage to grow at a similar pace for the next couple of years and will retain their obscenely high margins. Of course, for that, not only does the AI hype need to continue but their competitors need to fail so badly they wouldn't even be able to offer anything, even with 2-5x lower margins.

OTOH, Apple especially is not doing that great at all (and if the above assumptions hold, they are much more overvalued than Nvidia is). Their revenue this year is estimated to be lower than in 2022, while Nvidia's more than doubled and Microsoft is growing 5% or so per year.


67.63 is trailing PE. If you look at the Forward PE, it is at much more reasonable 38.76. Considering its growth in Q4, it is not that crazy. Though now the question is how long they can sustain it.


the market is forward looking - the sooner you understand that, the less money you will lose ;)


If the market were looking forward, they'd see that Nvidia's product offering has a minimal moat and no chance of staying differentiated over a timeline long enough for this valuation to make sense.


have you been paying attention?


Nvidia's competitors are starting to release solid hardware. They just need to release all the libraries with their open-source CUDA alternative and Nvidia's margins go from 75% to a more reasonable 20-30%. Every big tech company is giving Nvidia billions of dollars a year; the incentives to get a CUDA alternative working are too high for it not to happen.


> They just need to release all the libraries with their open source CUDA alternative

Yeah but they've tried. Year after year for the last decade. They keep failing. Maybe they're more motivated now? But CUDA isn't a moat because it's unassailable, it's a moat because nobody trying to cross it knows how to operate a shovel


Intel and AMD have tried, but their software divisions are incompetent, and there was much less money in GPUs pre-LLMs. Now every big tech company in the world is putting resources toward this as they're sick of the Nvidia tax.


In a gold rush, you either believe it is a fool's errand and ignore it, or you believe in it and dig as fast as possible. It never makes sense to work on improving the efficiency of the shovel maker.

Google has been trying for some time to use its TPUs, and so is Amazon. It makes so much more sense to pay a few billion to Nvidia and not risk the trillions in valuation.


You say it as if no one has tried...


This is definitely true eventually and there might be investment opportunities upstream of Nvidia to capitalize on this (TSMC and ASML come to mind).


Trailing vs. Forward P/E.


https://www.youtube.com/watch?v=hR45ja3VjGE

HFV for Hypothetical Future Value: get ready for the quazillion dollar!


Can't spell "hypothetically" without "hype".


We need to coin the term "hyperthetically".

"hypo" means under or less than.


I’m genuinely curious: have we seen profit (not revenue) increases from these companies buying these GPUs and deploying AI models?

What’s an AI product that’s actually making a lot of money (profit)? I cannot find reliable stats on how much profit OpenAI actually generates.


I wouldn’t expect anything meaningful at what is essentially a year and a half into the practical build cycle. Top to bottom nothing is optimized, use cases not fully established.

There's an awful lot of FUD on HN about this being like crypto. In a decade of trying I never found a meaningful and practical use case for crypto beyond "HODL." For generative AI we are finding immediately useful applications under nearly every rock we turn over, with increasingly powerful results. These aren't speculative, they're allowing us to do things at scales we couldn't imagine a few years ago. I'm seeing this with most of my professional network involved in high-end tech. The tool chains suck, and we are limited by GPU capacity as well as inference costs, but these are short-term constraints. As patterns solidify the tool chains will too, GPU capacity is ramping up rapidly, and there are plenty of optimizations on the horizon.

I for one see this much more akin to the gold rush of the internet over the 1994-2020 time frame in super fast motion. Will we overshoot? Absolutely. But the effect is real.


Lotta words to answer no.

I also would challenge your "year and a half" timeline... Gen AI has been around.


Yes, but not practically useful in general cases prior to GPT4 level models.


And you'll be saying that about GPT-5 when it comes out too.

This is what hype looks like.


You can choose to believe what you choose. However I’m working with these things at a senior tech lead level at a megacorp and what I’m doing is actually really real, with real material benefits. I don’t know that another order of magnitude performance improvement would matter as much to me as better tooling, performance, context, lower costs, and higher GPU capacity. But if it does, 1000% I’ll be hyping it. I hype things that work and dismiss things that don’t, that’s what an engineer is supposed to do.


OK... So care to share what that benefit looks like? This whole thread was about asking for concrete examples of generative AI's utility.


Give an LLM the text of a PDF document. Ask the model to extract values in the document or in tables. Input the values into a spreadsheet. This is at minimum a task which costs companies around the world Hundreds of Millions of dollars a year.
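
If it helps to make that concrete, here is a minimal sketch using the OpenAI Python client; the model name, field list, and invoice.txt file are assumptions for illustration, and the text is assumed to have already been pulled out of the PDF:

  # Hypothetical sketch: pull a few fields out of text extracted from a PDF.
  import json
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  pdf_text = open("invoice.txt").read()  # text previously extracted from the PDF

  prompt = (
      "Extract the following fields from the document below and reply with JSON "
      "only, using null for anything you cannot find: "
      "invoice_number, invoice_date, total_due, recipient.\n\n" + pdf_text
  )

  resp = client.chat.completions.create(
      model="gpt-4",     # assumed model; any sufficiently capable LLM works
      temperature=0,
      messages=[{"role": "user", "content": prompt}],
  )

  fields = json.loads(resp.choices[0].message.content)
  print(fields)  # these values would then be written into a spreadsheet row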


What you're describing is automation, and companies have been doing it for years.


Having worked on this directly and used basically every other piece of "automation" software available to do this, I can tell you that the GenAI solution is far superior. It's not close.


because apparently somehow ChatGPT magically knows what you're trying to extract.

Oh it doesn't? odd that.

ChatGPT cannot make judgement calls like what you're trying to imply it can.

ChatGPT can do some really cool things, but it's not magic.


If you tell a multi-modal LLM to extract and structure the contents of a PDF, it will absolutely be able to do that successfully. Further, they display a surprising ability at abductive "reasoning" (acknowledging, of course, that they don't reason at all), and are thus able to make pretty reasonable assumptions given the semantic context of a request. Unlike traditional extraction tools that require very specific tuning and are very fragile to structural changes in layout, LLMs tend to be very resilient to such things.


This guy prompts.


what are you extracting?


Just go use GPT4 and try it. You keep repeating the same rhetoric that it doesn’t work but people have made it work, including myself.


I pay for GPT4 enterprise, try again, only this time answer the question.


> I pay for GPT4 enterprise, try again, only this time answer the question.

When I'm frustrated, I talk to ChatGPT like that.

It works as well for the LLM as it does for the humans in this thread.

What's worse is, I'd been writing some SciFi set in 2030 since well before Transformer models were invented, and predicted in early drafts of my fiction that you'd get better results from AI if you treated them with the same courtesies that you'd use for a human just because they learn by mimicking us (which turned out to be true), and yet I'm still making this mistake IRL when I talk to the AI…


Why are you paying for enterprise if it’s not useful to you?


it's almost as if you're so hyped over this you take someone cautioning that it's not magic as the enemy.

maybe stop doing that.

Take the PDF example being thrown around in this thread: there's a magical step in the middle that no one is acknowledging.


What step? Do you think we're lying? I have literally built an application which takes in PDFs and extracts over 2 dozen values from them via prompting with LangChain. Do you think I'm a paid OpenAI sponsor or what?

You also realize that OpenAI has a dedicated Document Assistant which will literally extract information from a document you upload using prompts? Are you just unable to get that to work? I just don't know what you're arguing at this point; it's like you're watching people walk backwards and then yelling that it's impossible for humans to walk backwards.


odd that you keep using that word prompts.


Yes, you send a message to the API containing the context you are interested in, along with questions about that context the LLM can answer. The parlance commonly used refers to that message as a prompt. I don't know if you're delusional or just completely clueless about how LLMs work.


Honestly just a skill issue on your part. RAG and one shot learning can get you incredibly far and if you can't figure it out you're ngmi. And no one is using ChatGPT for this lol.


oh snap, mr hot-shit has the skillzors.


Wait, do you not realize what a prompt or a context window is? You literally think GPT just does things on its own?

You understand that if I want to extract the date a letter was sent, who the recipient was, what amount is due on an invoice, etc., I have to send a specific prompt asking for that to GPT with the PDF in the context? Do you just literally not know how this works?


Yeah I'd love to see the results... If anything if there is a multimillion dollar benefit on the table one might argue that companies should publish this data in a more useful format. But no, let's bandaid over outdated practices like PDF-only data and burn GPU cycles to do it, with 92% accurate results.


A lot of things don't require 100% accuracy, and raging against the world's outdated practices doesn't solve problems faced in the immediate present. After spending 30 years of my career raging at those practices to absolutely no meaningful effect, a more effective bandaid that has a semantic understanding of the content is probably as good as you get. And GPU cycles are meant to be wasted.


If it replaces hundreds or thousands of man-hours of expensive labor, then yes, let's do it.


"Should" is just a curse word.

Be the change.


Lol I just don't read those PDFs. They are probably generated by shoddy automated tooling anyway. Garbage in, garbage out.


A lot of garbage encodings encode valuable information, and resilient systems follow Postel's law, even if you personally do not.


This sort of bespoke automation ~~is~~ was expensive.


Sure. An example is classification. Suppose a company that provides a platform for other companies, but only if their businesses are supportable under a variety of regimes, such as industry (no cannabis, say), sanctions and export controls, etc. Typically this is done by training up an army of people who poorly review websites and collateral by hand and make determinations with very low precision and recall, and a very high expense profile including training, benefits, HR, etc., with high attrition. Building these operations is complex, time consuming, and often fails to produce meaningfully useful results for years.

With LLM at GPT4 level ability and a modest investment in scraping, context, building an agent based tool you can automate the entire thing with a pretty high precision and recall (such as a 0.9 precision with a 0.8 recall, vs 0.4 / 0.5 for humans). Using classical NLP, schema matching, and classification techniques you could triage and improve human performance by as much as 20%. But that requires a lot of effort and provides important but modest improvements. The LLM approach however is almost effortless from an engineering effort (note all of the infrastructure is the same as the NLP/schema matching/classification/etc effort).

This may not seem earth shatteringly important to you, but for an enterprise this is monumental. It dramatically reduces risk, complexity, cost, and all the while improves outcomes by an enormous amount.

Another example is incident management. Having LLM agents monitoring incidents on Slack has reduced our MTTR dramatically. They provide summaries for new joiners, broadcast management updates, etc., and take care of all the glue our response teams typically did, but with a high degree of quality, a low degree of variability, and extremely low latency compared to humans.

Another example is policy management. Many enterprises are saddled with thousands of pages of external requirements from regulators, and those in turn create tens of thousands of pages of policy, and hundreds of thousands of LoC of control implementations. By indexing them into a Context+RAG LLM you can create a policy oracle, allowing you to create coverage and effectiveness testing, analyze for policy or control gaps, draft responses to auditors and regulators with total policy knowledge, answer legal questions, advise management on the implications of a business-driven exception, etc. The state of the art before was a search engine dumping phrase-based matches into a grid of thousands of rows of results, which was effectively useless for all of these use cases.
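
A compressed, hypothetical sketch of the classification step described above (the label set and model name are placeholders, not the real thing):

  # Hypothetical sketch of the LLM classification step.
  from openai import OpenAI

  client = OpenAI()
  LABELS = ["supportable", "prohibited_industry", "sanctions_risk", "unclear"]

  def classify(site_text: str) -> str:
      resp = client.chat.completions.create(
          model="gpt-4",      # assumed; the point is "GPT4 level ability"
          temperature=0,
          messages=[{
              "role": "user",
              "content": f"Classify the business described below into exactly one of {LABELS}. "
                         f"Answer with the label only.\n\n{site_text}",
          }],
      )
      return resp.choices[0].message.content.strip()

  # Precision/recall figures like the 0.9 / 0.8 above come from scoring
  # classify() against a human-labelled evaluation sample.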


> training up an army of people that poorly review websites and collateral by hand and make some determinations with a very low precision and recall, and a very high expense profile including training, benefits, HR, etc, with high attrition. Building these operations is complex, time consuming, and often fails to produce meaningfully useful results for years.

followed by

> GPT4 level ability and a modest investment in scraping

oh yeah, these speech patterns are those of someone who isn't unnecessarily hyped while trying to downplay other approaches.

no bias here at all folks...


I’ve done the other approaches for decades, specifically using NLP and classifiers to try to do these and things, my friend. It’s been a really grueling effort with little success. But as an example in the real world visible to most people, the product catalog at eBay is the end product of my work in applying NLP and classifiers at building product catalogs, ontologies, and taxonomies automatically from scrapes of e-commerce sites etc. GPT4 would have turned my job into a simple exercise in structuring the classifications. It would have completely revolutionized what I had to do.

It's readily apparent from the pessimistic HN threads that they come from people who don't actually do these things for a living and haven't had much real-world experience. These speech patterns, as you call them, are the way people who "do things" differentiate from those who can't or haven't done things. It's called "experience."


Thanks for the comments, that was really enlightening. Not sure why people on a tech forum seem so eager to dismiss an emerging technology rather than using their imaginations a little


No one is doing that; the issue is the overhyping.

When Claude Shannon first came out with his information theory, everyone started trying to apply it to everything and discuss everything in terms of his theories.

He himself thought it was ridiculous and basically told everyone to stop it.

This is no different.


I just started using one of OpenAI's Assistants to turn unstructured job listings into structured JSON with inferred scores on dimensions that are aligned with my preferences and my attached resume.txt file.

Right now, I have to copy+paste my input and outputs, but my next task will be to write a Tampermonkey script to automate it.


OpenAI is also basically in the "selling shovels" business. In their case the shovel is an LLM. It's up to their customers to find something profitable to do with it.


I think they’re positioning themselves as the Apple or Android.

I equate their foundational models such as GPT to the iPhone. You can build on top of their ecosystem of models.

Everyone else are just app makers using their foundational models as the base.

It wouldn’t surprise me if they want to build an App Store on top of ChatGPT and take 30% eventually.


OpenAI is operating at a loss; they just raised a round a few days back. My guess is that they are burning at least $5B, based on the amounts they are raising.


I think the commenter was asking if OAI's customers are profitable.


I think it will be like blockchain integration where it takes a few years to really start showing its value.


Blockchain is at least 15 years old, if it was useful it wouldn't take so much effort to convince people it is useful.


So far, I haven't seen any value resulting from blockchain integration.


Last I heard the Australian stock market was upgrading its settlement system (CHESS) to use blockchain technology. So they clearly see value in it.


Sorry, I looked into this and it seems like the ASX abandoned their blockchain integration project. In the end, they spent over $150 million trying to get it to work. This was not a good example to use, and I apologize.


Tech companies are all about revenue, and hoping to replicate Amazon


Just like anything these days, profit comes when they bring the inference and hosting costs down. VCs are playing the scale game with AI companies


If you can just put the parameters into the multipliers in these layers, and leave them there while you cycle through the training data, you can use a lot less bandwidth to the compute hardware. Imagine a pipeline where you can feed through a billion tokens/second, each layer of the model persistently mapped to part of a grid of chips.

I suspect we're going to end up with a wildly different architecture before it's all over.


> I suspect we're going to end up with a wildly different architecture before it's all over.

This is my naïveté showing but aren't GPUs (relatively) general purpose? Do "AI" workflows require that flexibility or would they be better served by special purpose hardware? To use the obvious analogy, what happened with cryptocurrency mining moving from GPU to ASIC.


Yes - I believe the shift is coming. For inference especially, check out Groq’s chip design and its simplicity and speed compared to general purpose GPUs.


Selling shovels in a gold rush not only makes you rich, it also isolates you from the greatest part of the risk. Nvidia got lucky.


Spending 20 years focusing on building the best chips and software for scientific computing and artificial intelligence when it was not a massive market in anticipation of it existing is not luck, it's prescience. It's also obvious in retrospect but only one company managed to do it.


You can only sell shovels as long as people are making or have the hope of striking gold. NVIDIA destroying earnings means nothing because they are just selling shovels as fast as they can. This "bubble", if it is one, will pop when everyone finds out that buying every last card didn't actually help their bottom line after all.


For scale: the global box office revenue of every nation's film industry combined is about 42 billion dollars as of 2019. Nvidia's growth in market cap, from roughly 350 billion at the end of 2022 to 2 trillion today, is almost enough to account for the entire 1.6 trillion (6%) growth in the US economy for 2023.

Nvidia single-handedly carried the United States of America's 2023 GDP from recession territory (0% growth) to unprecedented massive economic boom (6% growth)!

--

https://en.wikipedia.org/wiki/Film_industry

https://www.bea.gov/news/2024/gross-domestic-product-fourth-....


Market Cap (and, therefore, growth in Market Cap) is not a component of GDP.

Also, to the extent that the chips are built in Taiwan, I'm not sure that they count in the US's GDP at all.


Or maybe NVidia was just heavily undervalued and the market just caught up. It wouldn't be the first time it happened. When AMD released the first Epyc/Threadripper chips, which made it obvious they were going to wipe the floor with Intel, the stock was more than 10x cheaper than it is today.

People were shouting "P/E" and "Dividends" back then for a few years as well, thinking it was crazy for AMD to approach Intel's market cap.


Nvidia's revenue is ~$60 billion dollars.

But in all honesty, the box office numbers aren't as impressive as the longtail licensing and merchandising.

That's where the real money is made for the movie franchises.


Merchandising, Merchandising!

https://youtu.be/vjB8XXw9y70


Doesn't the 42bn USD of the film industry include longtail and licensing?


US Domestic Box Office was about $7.5B in 2022 and $8.9B in 2023. So if there's a number of $42B, it is including a lot of other revenue.


Kinda dumb that I didn't pour every bit of cash into their stock when ChatGPT was released... it was kinda obvious... but so it goes.


I was telling my girlfriend this last night, hah.

I was aware that ChatGPT was a gamechanger when it was released. I was also aware that Nvidia was the only one making the hardware.

But I'm just not the type to immediately jump to "How can I best profit from this?" Rather than researching stocks, I researched transformers more.


I gotta be honest, I looked at ChatGPT and thought: just doing LLM inference is not that hard, someone will produce an ASIC that destroys Nvidia's margins here.

You could honestly make a hedge fund by listening to me pontificate things and then betting opposite of me. I have been bearish on MS, twitter, dropbox, bitcoin, the list goes on. It's admirable how I am so consistently wrong.


I guess to be fair, I was also bearish on Nvidia when it hit $250. So probably would have been dealing with pain since I likely would have sold after those 50% gains.

Same thing with BTC. I tried to buy 20,000 in 2010. It was too much hassle back then so I gave up. But even if I had gone through with it, I would have sold the moment I turned $20 into $200, much less held past $500 or $1000.


I feel this. I sold the bitcoin I had mined in 2010 for around $800 in 2012 and thought I was an absolute genius.


> someone will produce an ASIC that destroys nvidia's margins here.

From what I understand, if someone made an ASIC for LLM inference, such a device would effectively have the specific model locked in. You wouldn't be able to update the ASIC for newer versions of the model that alter the architecture. At best, you might be able to make one that allows you to update the weights.

I'm also not sure how cheaply you could produce such an ASIC. One that runs a 7B parameter model probably wouldn't be too expensive, but something that could rival GPT-4 would be.


I imagine that ASICs of that type would be analogue rather than digital, or use a different substrate than silicon - maybe light/laser based? It'd have to be quite revolutionary rather than evolutionary.


>> someone will produce an ASIC that destroys nvidia's margins here

I am no expert but it sure seems like Groq might be on to something with their LPU (language processing unit).


The issue is that the market doesn't necessarily react rationally.

The fact that OpenAI's valuation rose was very understandable (but not easy to invest in, in practice, plus the corporate structure is shady).

Nvidia, however, is riding very optimistic hype, as the cards are very likely to get replaced with more specialized AI chips in the near term.

Probably by Apple first, then whoever wakes up.

Also, once you've built your big datacenter with all these cards, you don't renew them every year.


> market doesn't necessarily react rationally.

market reacts rationally - just not your brand of rational.


Any weighted Nasdaq ETF will have a large component of NVDA.


I remember thinking back then "well, how would I know better than the market?". Still, I did buy some NVIDIA stock, which paid off great. My thought process comes down to: do I think people are as bullish as I think they should be about AI? Every time I come to Hacker News, I still answer "no". As long as I keep seeing people not acknowledging the full potential of AI, I will keep buying.


The semi-strong form of the Efficient Market Hypothesis says that the market has already incorporated all publicly available information. But "publicly available" is doing a lot of work there. What constitutes an impressive ability for an AI [1], how hard it is for a big company to port its tech stack from CUDA to ROCm, and things like that are public in some sense, but not necessarily priced in enough that you shouldn't trade on your sense of them. But on the flip side, there are a lot of things that might be obvious to an MBA with decades of experience that might be opaque to me, so please don't go all in on any bet.

[1]https://xkcd.com/1425/


Seems obvious that AMD gets their shit together and releases a viable competitor, but that's a bet I'd never take.


I've had AMD reach out to recruit me for their AI compilers / software team (my area of expertise), and frankly, I'm not sure they can pull it off. The hiring space is extremely competitive and they're not in a rush at all. Just my experience. They demanded candidates fly out within weeks, whereas NVIDIA and other competitors are making hiring decisions within days.


I was at Nvidia's GTC conference in 2010 and my friend who was with me suggested that we should buy Nvidia stock. I was sceptical as I felt it was highly likely that Intel, which had so many more resources, would soon release a superior product, and so didn't invest.


Well, do you think that this is the end game for ChatGPT/other models?


You can still do it


> Its rapid ascent in the past year has led analysts to draw parallels to the picks and shovels providers during the gold rush of 1800s as Nvidia's chips are used by almost all generative AI players from chatGPT-maker OpenAI to Google.

Yeah, it's the exact same scenario, except for... you know... the gold part. Everything is a gold mine when you print money like there's no tomorrow and your entire economy is based on debt.


Also, NVIDIA was caught round-tripping money around the year 2000 to artificially inflate its turnover and profit on the books.

They would never dare do that again, right? Right?!


It's a bit of a leading indicator: their current revenue comes from infrastructure being built by big tech making 'foundation models', and there is a lot riding on:

* transformer architecture can continue to scale to become good enough and fast enough for real use-cases

* useful real-life applications (or even operating systems) will be built on this infrastructure

* alignment, data governance, privacy, or hallucination issues are solvable (or at least mitigatable).

* supportive regulatory environment, limited legal liability for users.

* China won't take advantage of the current geo-political clusterfuck to move on its ambitions for Taiwan.

I'm sure there is more, basically there is a lot that needs to happen or not happen for Nvidia to continue to be the perceived winner here.


Capitalism, hype and irrationality are a good combination for a bubble.

I remember this famous AI professor Andrew Ng in an online lecture, talking about his "AI dream" and his enthusiasm to reach general intelligence.

Even John Carmack boarded that ship, and I don't see him delivering; honestly, I don't really trust him to offer real skepticism about AI if he doesn't deliver something good.

We really need more skeptical professors and experts to talk about how AI is not as great as it seems. The inventor of Siri gave a great talk about this.

Apparently people seem to confuse and conflate "making quality deepfakes" with "AI is useful".

It's weird when I am a developer, and I really want LESS technology.


AI is already useful; you must be living under a rock if you can't see it. I'm a luddite in general, but just the text embeddings and retrieval stuff is worth a goldmine. Moreover, the natural-language generation capability is another "obvious" example of utility. Since when has a computer ever been able to produce even moderately natural language from structured data?

Even ignoring 'general AI' (whatever that is... no one even knows what it means), that's already enough utility. Just the searching ability alone makes it easily more valuable than Google and Bing in their original incarnations (a minimal retrieval sketch follows below).
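
For what it's worth, the retrieval side really isn't complicated, which is part of why it's so broadly useful. A minimal sketch, assuming a hypothetical embed() callable that maps text to a fixed-length vector (any sentence-embedding model could stand in for it); nothing here is vendor-specific:

    # Minimal retrieval sketch. embed() is a hypothetical stand-in for any
    # sentence-embedding model that returns a fixed-length numpy vector.
    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def retrieve(query, documents, embed, top_k=3):
        # Rank documents by similarity to the query and return the best few.
        q = embed(query)
        scored = sorted(((cosine(q, embed(d)), d) for d in documents), reverse=True)
        return [d for _, d in scored[:top_k]]

In a real system you would precompute and index the document embeddings rather than embedding them per query, but the core idea is just this ranking step.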


>"making quality deepfakes" and "AI is useful"

Excellent point, as the broad output of most of this so far is just more fake garbage that clutters everything up with generated spam (except, to be fair, the code tools like Copilot). It reminds me of 3D printing, seemingly under-delivering on the massive hype in terms of what it can actually produce. The recent VR hype feels the same in relation to expectations and the cost of the products versus their usefulness.


Siri? Siri is your go-to example? Have you even been awake at all since 2020?

If you really can't see any utility for current-gen GenAI beyond deepfakes, there's really no hope for you.


It's only a frenzy because the news is focused on the stock. Stocks like Meta and Netflix also had a jet engine at full blast just a few months back, and it was just passing news. This is how a stock goes up more; it also helps Nvidia, because they can raise more money if they choose and attract FOMO into their products.

The massive demand is priced in, as Jensen indicated; it's now a bet on how much more comes after the next quarter.


Nvidia Growth Potential. Current Revenue ~$100B.

Revenue of its competitors.

Intel ~60B

AMD ~20B

Broadcom ~35B

Qualcomm ~35B

Mediatek ~13B


Why aren't AMD and Intel a screaming speculative investment at this point? How far behind can they be?


On the one hand, I totally understand why they’re in this position: there’s no competition and there’s only demand.

On the other hand: I feel like this is a bubble that’s going to pop. Everyone is promising AI futures and I just see a skirmish to bring a product to market that will leave a lot of folks crashing.

Either way, NVIDIA wins.

I am amazed how much the rest of the silicon industry has slept on this.

I know AMD have their competition, but their GPU software division keeps tripping over itself.

Intel could have done more but got complacent for a solid few years.

Qualcomm feels like a competitor in the making with Nuvia but is late to the dance.

Apple is honestly the biggest competitor in terms of hardware and software ecosystem, but refuses to get into the data center.

And of course I’m leaving out the Chinese companies like Huawei who would be a force to be reckoned with if it weren’t for sanctions. But China is also a hotspot for AI/ML.


NVIDIA was first to run with the wind. Jensen Huang made a big bet on AI, sent all the researchers his cards, and made the world run on CUDA. By the time AMD/Intel woke up to the AI tsunami, the sea had already turned green.


> I know AMD have their competition, but their GPU software division keeps tripping over itself.

They are actively stepping on every rake there is. E.g., they just stopped supporting the drop-in CUDA project everyone was waiting for, due to there being "no business case for CUDA on AMD GPUs" [0].

[0] https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq


Does HIPIFY not meet your standards?

https://github.com/ROCm/HIPIFY


And Google has TPUs that can compete with Nvidia, but they want to hoard them and won't sell outside of GCP.

How well Nvidia can hold its current valuation depends on how useful the new AI/LLM ecosystem becomes. Just chat apps and summarizing documents aren't it, IMO.


If Google can sell TPUs, perhaps they can add a trillion or two to their market cap?


Google seems perennially uninterested in selling products that require large-scale physical operations. I'm surprised they're still in the Pixel phone game.


I’d say more that Google are caught between wanting to be a service provider and a hardware company.

Pixel phones extend their services.

Selling TPU units reduces the differentiator for their services.


Why would Google sell TPUs when it can rent them out via GCP?


Because some people want to buy, not rent them, and you want those people's money....?


I don't think AI is a bubble but Nvidia's stock price might be. They aren't the only company designing GPUs.


IMHO, NVidia isn’t just competing on GPU. It’s competing on ecosystem and infrastructure so I think they’re quite untouchable for the foreseeable future.

I think AI will burst though. So few companies have a novel use, but all of them are promising a future they can’t yet reliably deliver. How much capital runway is there on this?

I’d bet (figuratively of course, as if I had any money) that AI will see a lot of casualties by end of year.


I'm bullish on AI because there are millions of people turning their attention to it and making discoveries. The democratization brought by consumer hardware being able to use these things will create a cambrian explosion similar to the open source movement for software.

It's hard to imagine the future but the imagination of the masses never ceases to amaze.


Perhaps I am more jaded because I am partially in this space, and I have a natural aversion to anything pitched as "democratization", because it's a sales-pitch word for "we want you to believe it will be mass market someday".

That's not to say democratizing abilities is not a worthy goal. I just don't think many of the companies rushing into this space really have a product vision that will end up being mass adopted.


I mean democratization in that people can buy a 4080 or 4090 and run and tinker with AI/LLMs themselves.
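
To make that concrete, here's a minimal sketch of the kind of tinkering a single consumer card enables, using the Hugging Face transformers API. The model ID is a placeholder; in practice you'd substitute any open-weights model that fits in your card's VRAM:

    # Sketch of local LLM inference on a consumer GPU via Hugging Face transformers.
    # The model ID below is a placeholder, not a real repo.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/some-7b-model"  # placeholder
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision so a ~7B model fits in ~16 GB of VRAM
        device_map="auto",          # place layers on the GPU (requires the accelerate package)
    )

    prompt = "Explain why GPUs are good at matrix multiplication:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

That's roughly the whole loop; quantized formats push the hardware bar even lower, which is the democratization argument in a nutshell.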


I think that’s too much of a minority to hold up a market.

The real key will be client side inference. And as more chips go the route of the Apple ones with onboard inference, I think it’ll be more appealing to the mass market.

Running LLMs locally on high end GPUs is going to always be niche.


> Apple is honestly the biggest competitor in terms of hardware and software ecosystem

Are they? It's not obvious that they could scale up their low-power integrated GPUs (as fast as they are) to be competitive in the datacentre. AMD and Intel, at least, are trying and have developed or are developing some products.

And I'm not sure about their software? What do they really have that Intel/AMD don't?


In terms of hardware, they absolutely could scale up in the same way Nvidia does: more chips with interconnect on a blade.

Given their power efficiency, they could just go wider and come out on top.

And in terms of software, Metal compute is the only real competitor to CUDA backends. Between their PyTorch backend, MPX, and large (unified) memory, a lot of ML engineers are going with a combo of Macs locally and NVIDIA in the cloud these days (a minimal device-selection sketch follows below).

Intel and AMD don't factor in at all comparatively for ML use, because Intel doesn't have a GPU backend for most things people want, and AMD has repeatedly made a mess of ROCm.
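
On the PyTorch side specifically, the Mac-locally/NVIDIA-in-the-cloud workflow mostly comes down to device selection; a minimal sketch, using only the standard backend strings PyTorch exposes ("cuda", "mps", "cpu"):

    # Minimal sketch: pick whichever PyTorch backend is available.
    # "cuda" covers NVIDIA cards; "mps" is the Metal backend on Apple Silicon.
    import torch

    def best_device():
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    device = best_device()
    x = torch.randn(1024, 1024, device=device)
    print(device, (x @ x).sum().item())  # same code runs on a Mac locally or an NVIDIA box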


Nancy Pelosi is an investing visionary, clearly.


Where can I find a feed of her current positions and trades?



Wall Street is a scam. We can't move on as a society while it is in control.


People don't get it: it's not AI that is valued highly, it's the dollar that is worthless.

If the US wants to win in Ukraine, it needs to print like a madman; look at the SPR, it tells you the truth (albeit with a three-month lag for mere mortals).

This is the end game, no matter what the eternal growth crowd says.

How money (and empires) die.


The US is spending the equivalent of pocket change in Ukraine. If a decade-plus of war in Afghanistan and Iraq didn't blow up the US, funding the Ukrainians certainly won't.


USA is not fighting in Ukraine. Ukraine is. To suggest otherwise disregards their exceptional heroism.


Ukraine is fighting on behalf of the USA.

Better?


Ukraine is fighting on behalf of Ukraine.


Sure, but everybody's losing besides the US military–industrial complex for which it is a true net positive


I'd argue the people of Ukraine are 'winning' compared to if Russia won. Like I really don't know what people expect Ukraine to do other than fight - get annexed and endure another Holodomor? It's not exactly a choice.


It could be worse, but demographically they've taken a bigger hit than Germany or France did during either world war. Best case, they get their entire territory back, spend 10+ years on demining, another 10+ years on rebuilding, and the next 50 years getting back to their pre-war population.

On the other side, the US gets a free weapons testing range, weakens Russia like never before, keeps its military industry afloat, uses up all its soon-to-expire/expired stocks, signs military and construction contracts for the eventual post-war rebuilding, etc.


So stupid of Russia to invade countries then?


If Ukraine had been losing, there wouldn't be an independent country called Ukraine right now.


No, just as bad a lie.


It's referred to as a proxy war for a reason. Naked idealism is worthless. How many US dollars is their heroism worth?


How much money was lost when Hitler wasn't stopped in time? Roughly that much.


I've heard that most politicians are selling stock, same as tech executives, so I don't think they expect printing.


SPR = Strategic Petroleum Reserve?


yes


If the US wanted Ukraine to win, the war would've been finished a long time ago.


What do you think the US wants? To wear down Russia's army through attrition?


Don't know. But not for Ukraine to win, for sure. Otherwise they'd be armed to the teeth by now.

https://www.forbes.com/sites/niallmccarthy/2019/09/12/the-an...


Weakening both Europe and Russia relative to itself. Part of the rationale for NATO, for blowing up Nord Stream, and for any other geopolitical strategy that causes chaos on the "World Island"[0] and opposes consolidation/strength from potential rivals.

[0] https://en.wikipedia.org/wiki/The_Geographical_Pivot_of_Hist...


citations needed.


Every economist that isn't asked to come on TV as far as I can tell



