The article is specifically about AI. Don't most useful LLMs require more VRAM than consumer Nvidia cards offer, and often need newer hardware features too, making it irrelevant that a G80 could run some sort of CUDA code?
I'm not particularly optimistic that ecosystem support will ever pan out enough for AMD to be viable, but crediting Nvidia with democratizing AI development seems like a stretch.
First of all, LLMs are not the only AI in existence. A lot of ML, stats, and scientific compute runs fine on consumer-grade GPUs, and there are plenty of problems where an LLM isn't even applicable.
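To make that concrete, here's a minimal sketch of non-LLM GPU work: training a small classifier in PyTorch. The synthetic data and layer sizes are just placeholders I picked for illustration; any CUDA-capable consumer card runs this comfortably.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy dataset (placeholder): 10k samples, 64 features, binary labels.
X = torch.randn(10_000, 64, device=device)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few full-batch training steps, all on the GPU.
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```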
Second, you absolutely can run and fine-tune many open-source LLMs on one or more 3090s.
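For instance, a minimal sketch of loading an open 7B model in 4-bit on a single 24 GB card (e.g. a 3090) with Hugging Face transformers + bitsandbytes. The model ID is just an example checkpoint, not a recommendation; swap in any open model you have access to.

```python
# Requires: pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"  # example checkpoint, assumed for illustration

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                   # ~4 GB of weights for a 7B model
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                   # places layers on available GPU(s)
)

inputs = tokenizer("CUDA made consumer GPUs useful for", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The same quantized setup is the usual starting point for parameter-efficient fine-tuning (e.g. LoRA adapters), which is how people fit training onto 24 GB in practice.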
But just being able to tinker, learn to write code, etc. on a consumer GPU is a gateway to the more compute-focused cards.