Nvidia, and the highest amount of VRAM you can get.
Currently that's the 4090 (24 GB); the rumor is the 4090 Ti will have 48 GB of VRAM, no idea if it's worth waiting or not.
The more VRAM, the higher the parameter count you can run entirely in memory (fastest by far).
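For a rough sense of what fits, here's a back-of-envelope sketch (weights only, fp16; activations, KV cache, and framework overhead add more on top, and quantization changes the math a lot):

```python
# Rough VRAM estimate for holding model weights in fp16/bf16 (2 bytes per parameter).
def vram_gb_for_weights(num_params_billion: float, bytes_per_param: int = 2) -> float:
    return num_params_billion * 1e9 * bytes_per_param / 1024**3

for size in (7, 13, 30, 65):
    print(f"{size}B params @ fp16: ~{vram_gb_for_weights(size):.0f} GB")
# 7B ~13 GB, 13B ~24 GB, 30B ~56 GB, 65B ~121 GB
# So 24 GB of VRAM roughly caps you around 13B in fp16 without quantization.
```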
AMD is almost a joke in ML. The lack of CUDA support (which is Nvidia proprietary) is straight-up lethal, and even though ROCm has much better support these days, from what I've seen it's still a fraction of the performance it should be. I'm also not sure whether individual projects need to support it explicitly; I know PyTorch has backend support for it, but I'm not sure how easy it is to drop in.
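For what it's worth, the "drop in" part on the PyTorch side is supposed to look like this: the ROCm wheels reuse the `torch.cuda` API (HIP backend), so in theory the same code path runs on either vendor, assuming your GPU is actually on ROCm's supported list. A minimal sketch:

```python
# Device-agnostic PyTorch sketch; on ROCm builds torch.cuda maps to HIP,
# so the same "cuda" device string is supposed to just work.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print("device:", device)
print("hip build:", getattr(torch.version, "hip", None))  # non-None on ROCm wheels

x = torch.randn(1024, 1024, device=device)
y = x @ x  # matmul runs on the GPU if one was found
print(y.shape, y.device)
```

The catch is that plenty of projects depend on custom CUDA kernels or CUDA-only libraries, and those won't carry over just because the core framework runs.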