
> In fact the truth is you can compile CUDA for AMD (using hipify)

You can compile x64 to ARM and performance tanks. Does this mean ARM isn't a comparable alternative to x64?

It just means their software works badly on said architecture. It could be that AMD's acceleration is horrible (but then FSR would be worse), or that it's just different, or that the translation layer is bad.



> or the translation layer is bad

There's no translation layer - you don't understand what hipify and CUDA are. CUDA is a C/C++ extension plus a set of APIs. 90% of CUDA kernel code (ie the stuff that actually runs on the SMs) does compile for AMD without any changes (intrinsics differ). hipify goes the extra step of renaming the remaining API calls to their HIP variants.
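To make that concrete, here's a toy vector-add (my own sketch, not from hipify's docs): the __global__ kernel body is the same source for nvcc and hipcc, and the host-side cuda* calls are the part hipify mechanically renames (cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy, cudaFree -> hipFree).

    // vecadd.cu - device code is plain C++, identical under CUDA and HIP.
    #include <cstdio>

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // same builtins on both
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1024;
        float h_a[n], h_b[n], h_c[n];
        for (int i = 0; i < n; ++i) { h_a[i] = i; h_b[i] = 2.0f * i; }

        // Host code: the only part hipify touches, via prefix renames.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, n * sizeof(float));
        cudaMalloc(&d_b, n * sizeof(float));
        cudaMalloc(&d_c, n * sizeof(float));
        cudaMemcpy(d_a, h_a, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, n * sizeof(float), cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);  // launch syntax is identical

        cudaMemcpy(h_c, d_c, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("c[10] = %f\n", h_c[10]);  // expect 30.0

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        return 0;
    }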

Again, all of this is to say there's no vendor lock-in like clueless whiny people claim; just a superior product.


The point was, I don't expect it to work, or to work smoothly. CUDA was made for Nvidia, same for ROCm on AMD. Comparing CUDA on AMD (or the reverse with ROCm) isn't a fair comparison.

If Nvidia is better at AI tasks and is the superior product, great. Maybe they can finally leave the GPU field.


> CUDA was made for Nvidia, same for ROCm on AMD

I'm gonna say it again, loud and clear: you don't have any understanding of what you're saying. 90% of the kernel code is exactly the same, transferable, and compilable, i.e. it's just C++.


Then enlighten me: how is an API made to work for Nvidia cards going to work smoothly on AMD?

Nothing you said prevents API makers from biasing their API to favor one hardware platform over the other.

EDIT: Which CUDA to AMD GPU translation project are you referring to? AMD's original efforts or ZLUDA?


Getting a bit difficult to understand your point of view here. The simple fact is NVDA executed well: they had the strategic vision from 2006-7 onwards to invest in R&D, build complex libraries encompassing various complicated algorithms, and allocate precious chip area to support them when no one was using GPUs for those purposes. They took a risk. I don't use Apple, but I don't complain about them. You are free to use AMD if you so desire. Why the hate?


> The simple fact is NVDA executed well, had the strategic vision from 2006-7 onwards to invest in R&D...

The issue here isn't so much Nvidia as it is Nvidia fanboyism and intellectually dishonest arguments.

What determines the speed of an algorithm, all other things being equal, is the raw power of the hardware underneath. To measure that, you use each vendor's own drivers, use APIs at an equivalent level (e.g. low level on both), and let them rip. You want as equal a comparison as possible.
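To illustrate what that looks like (a sketch; the cuda* and hip* event APIs mirror each other with identical signatures, so only the prefix changes between vendors):

    // Timing a kernel with each vendor's native event API.
    // CUDA spelling shown; on AMD, hipEventCreate/hipEventRecord/
    // hipEventElapsedTime are the one-to-one equivalents.
    #include <cstdio>

    __global__ void kernelUnderTest() {}  // stand-in for the real workload

    int main() {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        kernelUnderTest<<<1, 256>>>();
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);   // wait for the kernel to finish

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds
        printf("kernel took %.3f ms\n", ms);
        return 0;
    }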

What I don't expect is to take Nvidia drivers, load them onto an AMD graphics card, and then, when the thing glitches out or underperforms, say: see, it's bad.

The fact is that hipify isn't the fastest way to run CUDA code on AMD anymore; not since ZLUDA was created. Which raises unfortunate implications: why wasn't hipify able to reach the same performance? Maybe because it's a shitty translation layer. Who knows?

> I don't use Apple, but I don't complain abt them.

Just because you don't use them doesn't mean they don't negatively impact the world in a huge way. Look at the App Store, Apple's penchant for proprietary chargers, and the constant phone-upgrade treadmill.


I can't explain things to people who are so steadfastly ignorant. Google is free.


"If you can't explain it simply enough then you don't understand well enough."


Yeah, but you're comparing a mid-level API to an architecture. It's just a category error. It's like saying C is just a PDP-11 frontend.



