
AI as currently modeled can pretty much never get enough data, so the workload is mostly data processing and model generation.

When a query is asked of an AI, it has to generate a response from all of that data, and the query and response themselves become new data.
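To make that loop concrete, here's a minimal sketch. The names (serve_query, CORPUS_PATH) and the model.generate interface are hypothetical illustrations, not any real service's API:

    # Sketch of the feedback loop: every served query/response pair is
    # appended to a corpus that can feed the next training run.
    import json

    CORPUS_PATH = "interactions.jsonl"  # hypothetical log destination

    def serve_query(model, query: str) -> str:
        response = model.generate(query)   # assumed inference interface
        record = {"query": query, "response": response}
        with open(CORPUS_PATH, "a") as f:  # the interaction itself becomes data
            f.write(json.dumps(record) + "\n")
        return response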

Running an LLM on local consumer hardware can take upwards of 20 minutes for a single query, so an AI service responding to up to millions of requests a day would need a massive, hyper-parallelized server infrastructure.
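Rough back-of-envelope numbers, assuming 20 minutes per query and 5 million queries a day (both figures are my assumptions for illustration, and real inference servers are far faster per query than a consumer box):

    # How many consumer-grade machines it would take just to keep up
    # with the assumed load at the assumed per-query latency.
    SECONDS_PER_QUERY = 20 * 60      # 20 min per query on local hardware
    QUERIES_PER_DAY = 5_000_000      # assumed service load
    SECONDS_PER_DAY = 24 * 60 * 60

    # Total compute-seconds of daily demand divided by wall-clock seconds
    # gives the number of machines needed running flat out.
    machines_needed = QUERIES_PER_DAY * SECONDS_PER_QUERY / SECONDS_PER_DAY
    print(f"~{machines_needed:,.0f} machines")   # -> ~69,444 machines

Hence the need for hyper-parallelized server infrastructure rather than a scaled-up version of a local setup.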


