
I see. Well, picking a different model actually does help, a lot. So the main thing to consider when asking whether your assumptions are valid is whether the $10k GPU and the $1k GPU are equivalent (they're not), and what you're paying for, because it's not primarily flops. Take the two models of GV100, for example: they have exactly the same perf, yet one is half the price of the one @majke picked as the example. In this case, picking a different model helps the price by 2x; the difference is memory size. Other non-perf differences that affect price include thermal properties, support level, and GPU generation. These things come down to your goals and requirements. Maybe @majke didn't check recently, but you can buy GPUs newer than a GV100 that have even more memory, higher perf, are server certified, and cost about half as much, so even using the half-price smaller GV100 would be cherry-picking in my book. And if we're talking about consumer/hobbyist needs rather than server-farm needs, that's when you can get a lot of compute for your money.


Thanks @wmf @dahart for the discussion.

You are both right:

- I can't just buy a 3080 and stuff it into my servers, due to legal reasons.

- I can't just buy a 3080 and stuff it into my servers, due to availability.

- Often (as in the example I gave) the price-to-performance of a GPU is not worth the cost of porting software.

- Often (as in the example I gave) the price-to-performance of a GPU is not super competitive with a CPU.

- Sometimes you can make the math work: either by picking a problem GPUs excel at (memory speed, single precision, etc.), by picking a consumer-grade GPU, or by having access to cheap/used datacenter-grade GPUs.

- In the example I gave, even with a cheap 3080 and, say, a 20-30x better perf/dollar ratio for the GPU... is it still worth it? It's not like my servers are calculating euclidean distance for 100% of their CPU time. The workload is diverse: nginx, DNS, database, javascript. There is a bit of heavy computation, but it's not 100% of the workload. For GPGPU to pay for itself it would need to take over a large portion of the load, which in the general case is not possible (see the rough sketch below). So I would take a GPU into consideration if it was 200x-1000x better per dollar than a CPU; then I could make a strong financial argument.
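
A back-of-the-envelope sketch of that reasoning, in Python, with made-up numbers (the CPU budget and the 5% offloadable fraction are illustrative assumptions; the ratios just loosely echo the numbers above):

    # Rough cost model: the saving from offloading is capped by the
    # fraction of the workload the GPU can actually absorb.
    CPU_BUDGET = 100_000   # hypothetical yearly CPU spend, in dollars
    OFFLOADABLE = 0.05     # assumed share of CPU time that is GPU-friendly

    def offload_economics(perf_per_dollar_ratio):
        # Dollars of GPU needed to absorb the offloadable slice,
        # and the net yearly saving after buying that GPU capacity.
        gpu_spend = OFFLOADABLE * CPU_BUDGET / perf_per_dollar_ratio
        cpu_saved = OFFLOADABLE * CPU_BUDGET
        return gpu_spend, cpu_saved - gpu_spend

    for ratio in (5, 30, 200, 1000):
        gpu_spend, net = offload_economics(ratio)
        print(f"{ratio:>5}x perf/dollar -> GPU spend ${gpu_spend:>6.0f}, net saving ${net:>6.0f}")

    # The net saving asymptotes to OFFLOADABLE * CPU_BUDGET ($5,000 here),
    # and any porting/ops cost comes out of that, so a 20-30x ratio on a
    # small slice of a diverse workload doesn't move the needle.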

The point I was trying to make is that GPUs are a good fit for only a small fraction of compute workloads. For them to make sense:

- more problems would need to fit on them

- or the performance/dollar would need to improve further by orders of magnitude



