The OpenAI vs. DeepSeek debate is fascinating... but I think people are oversimplifying both the challenges and the opportunities here.

First, OpenAI’s valuation is a bit wild: $157B at roughly 13.5x forward revenue? That’s Facebook-at-IPO-level multiples, and OpenAI’s economics don’t scale the same way. Generative AI costs grow with usage, and compute isn’t getting cheaper fast enough to balance that out. Throw in the reported $6B+ infrastructure spend for 2025, and yeah, there’s a lot of financial risk. But that said... their growth is still insane. Reportedly $300M in monthly revenue by mid-2024? That’s the kind of adoption most companies dream about, even if the profits aren’t there yet.
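Quick back-of-envelope, for anyone who wants to sanity-check the multiple (the $157B, 13.5x, and $300M/month figures are the reported numbers above; the rest is straight arithmetic, so treat it as a rough sketch):

  # Rough check on what the valuation implies vs. current revenue
  valuation = 157e9            # reported valuation, USD
  forward_multiple = 13.5      # reported forward-revenue multiple
  monthly_revenue = 300e6      # reported monthly revenue, USD

  implied_forward_revenue = valuation / forward_multiple        # ~$11.6B
  annual_run_rate = monthly_revenue * 12                        # ~$3.6B
  growth_priced_in = implied_forward_revenue / annual_run_rate  # ~3.2x

  print(f"Implied forward revenue: ${implied_forward_revenue / 1e9:.1f}B")
  print(f"Annualized run rate:     ${annual_run_rate / 1e9:.1f}B")
  print(f"Growth the multiple is pricing in: {growth_priced_in:.1f}x")

In other words, the multiple only makes sense if you believe revenue roughly triples from the current run rate, which is exactly where the cost-structure question bites.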

Now, the “no moat” argument... sure, DeepSeek showed what’s possible on a budget, but let’s not pretend OpenAI is standing still. These open-source efforts (DeepSeek included) still build on years of foundational work by OpenAI, Google, and Meta. And while open models are narrowing the gap on raw capability, it’s the ecosystem that wins long-term. Think of how Linux displaced proprietary Unix on the strength of its ecosystem; OpenAI’s play is closer to Microsoft’s. If they get it right, they don’t need the best models, they need to be the default toolset for businesses and developers. (Also, let’s not forget how hard it is to maintain consistency and reliability at OpenAI’s scale; DeepSeek isn’t serving 10M paying users yet.)

That said... I get the doubts. If competitors can offer “good enough” models for free or dirt cheap, how do you justify charging $44/month (or whatever the price ends up being)? The killer app for AI might not even look like ChatGPT; Cursor, for example, has been far more useful for me at work. OpenAI needs to think beyond being a platform or consumer product and figure out how to integrate AI into industry workflows in a way that actually adds value. Otherwise, someone else will eat their lunch.

One thing OpenAI could do better? Focus on edge AI and lightweight models. DeepSeek already showed that efficient models trained on a fraction of the budget can challenge the hyperscaler approach. Why not explore something like a “ChatGPT Lite” for mobile devices or edge environments? It could open up new markets, especially where latency or data privacy is a concern.

Finally... the open-source thing. OpenAI’s “open” branding feels increasingly ironic, and it’s creating a trust gap. What if they flipped the script and started contributing more to the open-source ecosystem? It might seem counterintuitive, but being seen as a collaborator could soften some of the backlash and even drive adoption indirectly.

OpenAI is still the frontrunner, but the path ahead isn’t clear-cut. They need to fix their cost structure, answer the competition from open models, and figure out what comes after ChatGPT. If they don’t adapt quickly, they risk becoming Yahoo in a Google world. But if they pivot smartly (edge AI, better B2B integrations, maybe even some open-source goodwill), they still have the potential to lead this space.



