
Eh, 16 years ago CUDA was the cheap option, compared to other HPC offerings.

And there wasn't a parts shortage (modulo some cryptocurrency mining, but that affected both GPU vendors).

And ML models weren't so large as to make 8 GB of VRAM sound meagre.

And there weren't a bunch of venture capitalists throwing money at the work, because the state-of-the-art models were doing uninspiring things, like trying to tag your holiday photos and getting it wrong because they couldn't tell a bicycle helmet and a bicycle apart.


