(1) Much development is already moving up from CUDA to the LLM layer, so it's less of an issue. Nvidia is also doing more work to increase interoperability. It could be an issue, I suppose, but it doesn't seem like it, since there's nothing close to CUDA or its ecosystem.
(2) AMD has attracted significant investment, judging by the appreciation in its market cap, with a P/E ratio roughly 3x Nvidia's. However, AMD is so far behind in so many ways that I don't believe the problem is one of investment, but a structural one. Nvidia has simply been preparing for this for so long that it has a tremendous head start, not to mention being more focused on it. Remember that AMD also competes with Intel, etc.
(3) Hyperscalers are already building their own chips; it seems even Apple used its own chips for Apple Intelligence. It's relatively not too hard to make custom chips for AI ("relatively" is doing a lot of lifting in that sentence, because it's all HARD). The hard, near-impossible thing is making cutting-edge chips. And cutting-edge chips are what the OpenAIs of the world demand for training, because releasing the newest, best model 1-3 months ahead of a competitor is worth so much.
If anything, I'd say the biggest threat to Nvidia in the next 1-3 years is an issue with TSMC or some new paradigm that makes Nvidia's approach suboptimal.
I don't think I understand your point in (1) that it's less of an issue because development is moving to the LLM layer. I can infer that maybe CUDA isn't a big part of the moat, given your other points that the hard part is making cutting-edge chips.
It's just the natural evolution of tech toward higher levels of abstraction. In the beginning, most development was on CUDA because the models had to be built and trained.
But now that there are plenty of advanced models, the next level up is getting built out as more developers build applications on top of the models (e.g. apps using GPT's API).
So where five years ago most AI development happened at the CUDA level, now most of it happens on top of the LLMs that CUDA was used to build, in order to build applications.
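To make the abstraction shift concrete, here is a minimal sketch of what application-level LLM development typically looks like, assuming an OpenAI-style chat completions interface (the endpoint URL, model name, and function are illustrative, and no network request is actually sent):

```python
import json

# Five years ago, "AI development" often meant writing CUDA kernels and
# training loops. Today, a typical application developer works one level
# up: compose a request for a hosted model and handle the reply. This
# sketch only builds an OpenAI-style chat completions JSON body; it does
# not send anything over the network.

API_URL = "https://api.openai.com/v1/chat/completions"  # illustrative

def build_chat_request(prompt: str, model: str = "gpt-4o") -> str:
    """Return the JSON body for an OpenAI-style chat completions call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_chat_request("Summarize Nvidia's moat in one sentence.")
```

The point of the sketch is how little of the stack the application developer touches: the model, the kernels, and the hardware are all behind the API, which is exactly why less new development happens at the CUDA layer directly.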