
You are right, but that's not my point. The point is that it's difficult to scale cloud products that require lots of AI workloads.

Here, Home Assistant is telling you: you can use your own infra (most people won't) or you can use our cloud.

It works because the user base will most likely be rather small, and at that scale Home Assistant can treat cloud resources as if they were infinite.

If their product were amazing, and suddenly millions of people wanted to buy the cloud version, they would have a big problem: cloud infrastructure is never infinite at scale. They would be limited by how much compute their cloud provider is able/willing to sell them, rather than by how many of those small boxes they could sell, possibly losing the opportunity to corner the market with a great product.

If you package everything, you don't have that problem (you only have the problem of being able to make the product at all, which I agree is also not small). But in terms of energy efficiency, it also doesn't have to be that bad: the Apple Silicon line has shown that you can have very efficient hardware with significant AI capabilities. If you design an SoC for that purpose, it can be energy efficient.

Maybe I'm wrong that this approach will become common, but the fact that scaling AI services to millions of users is hard still stands.



But here you're assuming that your datacenter can't provide you with X GPUs, yet you can manufacture 100X of them, the factor of 100 being dictated by 1% utilization.
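
A rough sketch of that objection, using the 1% utilization figure from above and a hypothetical user count (it ignores correlated peak demand, which would shrink the ratio in practice):

    # Back-of-the-envelope: silicon needed to serve the same users,
    # dedicated home boxes vs. time-shared datacenter GPUs.
    utilization = 0.01                      # each home box is busy ~1% of the time
    users = 1_000_000                       # hypothetical user base

    local_boxes = users                     # one dedicated box per user
    shared_gpus = int(users * utilization)  # shared capacity, ignoring peak overlap

    print(local_boxes / shared_gpus)        # -> 100.0, i.e. ~100x more silicon to manufacture

In other words, refusing to time-share compute is what forces you to build far more hardware than the datacenter would ever need to provision.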



