> They aren't going to switch unless something is substantially better.

Except one product is 100% free and the other is mostly locked behind paid subscriptions.



How long can DeepSeek stay free?

It’s already unable to keep up with demand, it will never be the default on mobile devices, and businesses in the US will never trust it.


That's not really the important question.

The important question is "will this and similar optimizations to come permit local LLM use, cutting OpenAI out of the equation entirely?"


Businesses don’t even want to maintain servers locally. They definitely aren’t going to start managing servers beefy enough to run LLMs and try to run them with the reliability, availability, etc. of cloud services.

This will make the cloud providers more valuable - especially AWS and GCP, and to a lesser extent the also-ran clouds. The other models hosted by AWS on Bedrock are already “good enough” for most business use cases.
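
For what it’s worth, calling a hosted model on Bedrock is already just a few lines of code; a minimal sketch, assuming boto3 and Bedrock’s model-agnostic Converse API (the model ID and prompt are just examples):

    import boto3

    # Bedrock puts hosted models behind one runtime API, so the
    # business never touches the GPU servers underneath.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative
        messages=[{"role": "user",
                   "content": [{"text": "Summarize this incident report."}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )

    print(response["output"]["message"]["content"][0]["text"])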

And then consumers are definitely not going to be running LLMs locally on their computers to replicate ChatGPT (the product) any more than they are going to get an FTP account, mount it locally with curlftpfs, use SVN or CVS on the mounted filesystem, and then, from Windows or Mac, access the FTP account through built-in software instead of using cloud storage like Dropbox. [1]

Whether someone comes up with a better product than ChatGPT and overcomes its brand awareness remains to be seen.

[1] Also the iPod had no wireless, less space than the Nomad and was lame.


> And then consumers are definitely not going to be running LLMs locally on their computers to replicate ChatGPT...

Not personally. They'll let Apple handle it for them.

(This is already a thing. https://machinelearning.apple.com/research/introducing-apple...)


There is a reason I kept emphasizing the ChatGPT product. The (paid) ChatGPT product is not just a text-based LLM. It can interpret images, has a built-in Python runtime to offload queries that LLMs aren’t good at (like math), and offers web search, image generation, and a couple of other integrations.
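
That offloading is just tool use: instead of predicting digits token by token, the model emits a structured call, the runtime executes it, and the exact result is fed back into the chat. A toy sketch of the dispatch step; the calc tool and the hard-coded tool_call are my own illustration, not OpenAI’s internals:

    import ast
    import operator

    # Tiny "calculator" tool: safely evaluates arithmetic the
    # model would otherwise have to guess at.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calc(expr):
        def ev(node):
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported expression")
        return ev(ast.parse(expr, mode="eval").body)

    # Pretend the model replied with this instead of an answer:
    tool_call = {"name": "calc", "arguments": {"expr": "123456 * 789"}}

    if tool_call["name"] == "calc":
        result = calc(tool_call["arguments"]["expr"])
        print(result)  # 97406784: exact, unlike a token-by-token guess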

The local LLMs on iPhones are literally 1% as powerful as the server-based models like 4o.

That’s not even considering battery constraints.


> The local LLMs on iPhones are literally 1% as powerful as the server-based models like 4o.

Currently, yes. That's why this is a compelling advance - it makes local LLMs much more feasible, especially if this is just the first of many breakthroughs.

A lot of the hype around OpenAI has been due to the fact that buying enough capacity to run these things wasn't all that feasible for competitors. Now, it is, potentially even at the local level.
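
The back-of-the-envelope math behind “potentially even at the local level” (model sizes here are illustrative, and this counts weights only; activations and KV cache add more):

    # Memory footprint of a model's weights: params * bytes per weight.
    def weights_gb(params_billions, bits_per_weight):
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    for params in (7, 70):
        for bits in (16, 4):
            print(f"{params}B @ {bits}-bit: {weights_gb(params, bits):.1f} GB")

    # 7B  @ 16-bit:  14.0 GB  (workstation GPU territory)
    # 7B  @  4-bit:   3.5 GB  (fits in a laptop's RAM)
    # 70B @ 16-bit: 140.0 GB  (multi-GPU server)
    # 70B @  4-bit:  35.0 GB  (high-end desktop, not a data center)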



