bitexploder | 70 days ago | on: Google will let companies run Gemini models in the...
My limited understanding is that CUDA (i.e., GPUs) wins on smaller batches and jobs, while TPUs win on larger ones. CUDA is simply easier to use and better suited to typical small workloads; at some point, for bigger ML training and inference loads, TPUs start making sense.
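The amortization intuition behind this can be sketched with a toy throughput model — all numbers here are hypothetical, chosen only to illustrate the shape of the trade-off, not measured from real GPU or TPU hardware:

```python
# Toy model (hypothetical numbers): an accelerator with high fixed
# per-launch overhead only pays off once batches are large enough to
# amortize that overhead across many items.

def throughput(batch_size, fixed_overhead_ms, per_item_ms):
    """Items processed per millisecond for a given batch size."""
    return batch_size / (fixed_overhead_ms + batch_size * per_item_ms)

# Hypothetical devices: "gpu" has low launch overhead but a higher
# per-item cost; "tpu" has higher overhead but a lower per-item cost.
def gpu(b):
    return throughput(b, fixed_overhead_ms=1.0, per_item_ms=0.10)

def tpu(b):
    return throughput(b, fixed_overhead_ms=8.0, per_item_ms=0.02)

for b in (8, 64, 1024):
    better = "gpu" if gpu(b) > tpu(b) else "tpu"
    print(f"batch={b:5d}  gpu={gpu(b):6.2f}/ms  tpu={tpu(b):6.2f}/ms  -> {better}")
```

With these made-up constants the GPU wins at small batches and the TPU overtakes it once the batch is large enough that its lower per-item cost dominates the fixed overhead — the same crossover shape the comment describes.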