
The frontier models are always going to tempt you with their higher quality and quicker generation, IMO.


I’ve been mentally mapping the models to the history of databases.

In the early days, you had to pay for most databases. There are still paid databases that are simply better than the free ones; some teams decide the cost is worth the improvement, and there is a (tough) business there. Fortunes were made in the early days.

But eventually open source databases became good enough for many use cases, and they have their own advantages. So lots of teams use them.

I think coding models might have a similar trajectory.


You make a good point -- a majority of applications are now using open source or free versions[1] of DBs.

My only feedback is: are these the same animal? Can we compare an O/S DB vs. a paid/closed DB to me running an LLM locally? The biggest issue with LLMs right now is simply the cost of the hardware to run one locally, not the quality of the actual software (the model).

[1] e.g. SQL Server Express is good enough for a lot of tasks, and I guess would be roughly equivalent to the upcoming open versions of GPT vs. the frontier version.


A majority of apps nowadays are using proprietary forks of open source DBs running in the cloud, where their feature set is (slightly) rounded out and smoothed off by the cloud vendors.

Not that many projects are doing a fully self-hosted RDBMS at this point. So ultimately proprietary databases still win out; they just (ab)use the PostgreSQL trademark to make people think they're using open source.

LLMs might go the same way: the big clouds offering proprietary fine-tunes of models given away by AI labs using investor money?


That's definitely true. I could see more of the "running open source models on other people's hardware" model.

I dislike running local LLMs right now because I find the software still kinda janky: you often have to tweak settings and hunt down the right model files. Basically, it takes a bunch of domain knowledge I don't have space for in my head, on top of maintaining a high-spec piece of hardware and paying for the power costs.
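
For anyone who hasn't hit this, here's a minimal sketch of the kind of knob-twiddling I mean, assuming the llama-cpp-python bindings; the model filename and every parameter value below are illustrative guesses, not recommendations:

    # Sketch only: the model path and settings are hypothetical examples.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-7b-model-q4_k_m.gguf",  # you find/convert this yourself
        n_gpu_layers=-1,  # offload every layer to the GPU, if it fits in VRAM
        n_ctx=8192,       # context window; too big and you run out of memory
        n_threads=8,      # CPU threads for whatever doesn't fit on the GPU
    )

    out = llm("Q: Why is local inference fiddly?\nA:", max_tokens=128, temperature=0.7)
    print(out["choices"][0]["text"])

Every one of those values depends on your hardware and the quantization you picked, which is exactly the domain knowledge I'd rather not carry around.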


Closed doesn't always win over open. People said the same thing about Windows vs Linux, but even Microsoft was forced to admit defeat and support Linux.

All it takes is some large companies commoditizing their complements. For Linux it was Google, etc. For AI it's Meta and China.

The only thing keeping Anthropic in business is geopolitics. If China were allowed full access to GPUs, they would probably die.


> The only thing keeping Anthropic in business is geopolitics. If China were allowed full access to GPUs, they would probably die.

Disagree. Anthropic has a unique approach to post-training its models and tuning them to behave the way it wants. No other lab has managed to reproduce the style and personality of Claude yet, which is currently a key reason coders prefer it. And since the post-training data is secret, it'll take other providers a lot of focused effort to get close to that.



