OpenAI should release its frontier model as an open-weight model. There are already open-weight models that match OpenAI's best models (at the time of their release), so the idea that OpenAI would lose something by making its frontier models open-weight doesn't hold up. With an open-weight model, they would instantly kill any proprietary competitor, similar to what Google did to all its competitors with Android.
IMO, OpenAI should focus on tooling, infra, and setting standards for AI apps, and profit from those. MCP is what "custom GPTs" were supposed to be; OpenAI lost that battle, among many others.
Gopher started out freely available, but the University of Minnesota later changed the licensing terms and started charging fees for its server, similar to what OpenAI has done. We know how that ended: today most people haven't even heard of Gopher, despite its early adoption.
> There are already open-weight models that match OpenAI's best models (at the time of their release), so the idea that OpenAI would lose something by making its frontier models open-weight doesn't hold up
If true, that cuts both ways though, right?
It also means we lose nothing if OpenAI keeps its frontier model closed-weight.
As an example, DeepSeek R1 is far better at producing maintainable, modularized code when used as a coding assistant.
The big deal is that the distilled versions like DeepSeek-R1-Distill-Qwen-32B are good enough that anyone with a few old 1080 Tis sitting around can run them and get most of the performance.
When you can run gemma3/qwq/DeepSeek-R1-Distill-Qwen-.../etc., you can easily switch models when one fails, too.
And you get consistent performance that doesn't degrade over time, the ability to avoid leaking prompt data between clients, etc.
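To make the "easily switch models" point concrete, here is a minimal sketch. It assumes you're serving models locally behind an OpenAI-compatible endpoint (Ollama on its default port in this example); the model tags are placeholders for whatever you've actually pulled:

    # Minimal sketch: a local OpenAI-compatible server (e.g. Ollama) on its
    # default port. Model tags are whatever your own setup has available.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    for model in ("qwq", "gemma3", "deepseek-r1:32b"):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Refactor this function into smaller pieces: ..."}],
        )
        print(model, "->", resp.choices[0].message.content[:200])

Swapping models is a one-string change, and everything stays on your own hardware, which is how you avoid the prompt-leakage issue above.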
It is all horses for courses, though. For me, o1-pro is roughly the same as o1 with just higher limits, etc., but is still worse than o1-preview IMHO.
In my experience, the few-percentage-point edge on synthetic benchmarks that o1-pro was claimed to have doesn't matter much on real-world problems.
R1 pretty much matched o1-1217 on every benchmark, and the distilled models like DeepSeek-R1-Distill-Qwen-32B only lose a small fraction of that performance.
A few months of o1-pro costs will get you a usable local GPU setup if you are fine with ~20 eval tokens/sec (rough math below).
But if o1-preview wasn't better than o1-pro for your use case... the calculus can change.
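Back-of-envelope math for the cost claim above; the GPU price is a guess and the $200/month figure is just the current ChatGPT Pro list price, so treat this as a sketch:

    # Back-of-envelope only; prices are assumptions, not quotes.
    pro_per_month = 200      # ChatGPT Pro subscription (o1-pro access), USD/month
    used_1080_ti = 180       # very rough used price for an 11 GB 1080 Ti, USD
    gpus = 4                 # ~44 GB VRAM total, plenty for a 32B model at 4-bit

    break_even_months = gpus * used_1080_ti / pro_per_month
    print(f"{break_even_months:.1f} months of Pro ~= the GPU outlay")  # ~3.6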