
I think there's always going to be somebody who doesn't want to bother with the complexity of setting up a model to run locally.

Spending way too much time trying to track down a very particular version of a GPU driver or similar just isn't going to be worth it if you can make an API call to some remote endpoint that's already done the heavy lifting.
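To make the "just make an API call" point concrete, here's a minimal sketch of what the caller's side looks like. The endpoint URL, model name, and payload shape are all hypothetical (loosely in the style of common hosted chat-completion APIs), not a real service:

```python
import json
from urllib.request import Request

# Hypothetical hosted-inference endpoint; URL and payload shape
# are illustrative only, not a real API.
ENDPOINT = "https://api.example.com/v1/chat/completions"

def build_inference_request(prompt: str, model: str = "some-hosted-model") -> Request:
    """Build an HTTP request for a remote model. No GPU, drivers,
    or local weights needed on the caller's side."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
        },
        method="POST",
    )

req = build_inference_request("Summarize the GDPR in one sentence.")
print(json.loads(req.data)["model"])
```

That's the whole client-side footprint: a JSON payload and an auth header, versus hunting down driver versions and downloading weights for a local setup.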

Plenty of value in handling the hard part so your customer doesn't have to.

I don't know how much of the current focus on local models comes from privacy concerns, but at least some of it does. Once there's something like the GDPR for data submitted for inference, I think even more people will put down the Docker containers and pick up the REST endpoints.


