Is it just me, or is the rate of model releases accelerating to an absurd degree? Today we have Gemini 3 Deep Think and GPT 5.3 Codex Spark. Yesterday we had GLM5 and MiniMax M2.5. Five days before that we had Opus 4.6 and GPT 5.3. Then maybe two weeks before that, I think, we had Kimi K2.5.
I think it is because of the Chinese new year.
The Chinese labs like to publish their models around the Chinese new year, and the US labs do not want to let a DeepSeek R1 (20 January 2025) impact event happen again, so I guess they publish models that are more capable than what they imagine the Chinese labs are yet capable of producing.
And it made almost zero impact; it was just a bigger version of DeepSeek V2 and went mostly unnoticed because its performance wasn't particularly notable, especially for its size.
It was R1 with its RL training that made the news and crashed the stock market.
In fact, many Asian countries use lunisolar calendars, which basically follow the moon for the months but add an extra month every few years so the seasons don't drift.
As these calendars also rely on time zones for date calculation, there are rare occasions where the New Year start date differs by an entire month between 2 countries.
If that's the sole problem, it should be called "Chinese-Japanese-Korean-whatever-else new year" instead. Maybe "East Asian new year" for short. (Not that there are absolutely no discrepancies among them, but they are similar enough that New Year's Day almost always coincides.)
This non-problem sounds like it's on the same scale as "The British Isles", a term which is mildly annoying to Irish people but in common use everywhere else.
For another example, Singapore, one of the "many Asian countries" you mentioned, lists "Chinese New Year" as the official name on government websites. [0] Also note that neither California nor New York is located in Asia.
And don't get me started on "Lunar New Year? What Lunar New Year? Islamic Lunar New Year? Jewish Lunar New Year? CHINESE Lunar New Year?".
“Lunar New Year” is vague when referring to the holiday as observed by Chinese labs in China. Chinese people don’t call it Lunar New Year or Chinese New Year anyways. They call it Spring Festival (春节).
As it turns out, people in China don’t name their holidays based off of what the laws of New York or California say.
Please don't, because "Lunar New Year" is ambiguous. Many other Asian cultures also have traditional lunar calendars but a different New Year's Day. It's a bit presumptuous to claim that this is the sole "Lunar New Year" celebration.
I didn't expect language policing to have reached such a level. This is specifically related to China and DeepSeek, which celebrates Chinese new year. Do you demand that all Chinese people say happy lunar new year to each other?
I'm having trouble just keeping track of all these different types of models.
Is "Gemini 3 Deep Think" even technically a model? From what I've gathered, it is built on top of Gemini 3 Pro, and appears to be adding specific thinking capabilities, more akin to adding subagents than a truly new foundational model like Opus 4.6.
Also, I don't understand the comments about Google being behind in agentic workflows. I know that the typical use of, say, Claude Code feels agentic, but also a lot of folks are using separate agent harnesses like OpenClaw anyway. You could just as easily plug Gemini 3 Pro into OpenClaw as you can Opus, right?
Can someone help me understand these distinctions? Very confused, especially regarding the agent terminology. Much appreciated!
The term “model” is one of those super overloaded terms. Depending on the conversation it can mean:
- a product (most accurate here imo)
- a specific set of weights in a neural net
- a general architecture or family of architectures (BERT models)
So while you could argue this is a “model” in the broadest sense of the term, it’s probably more descriptive to call it a product. Similarly we call LLMs “language” models even if they can do a lot more than that, for example draw images.
If someone says something is a BERT “model” I’m not going to assume they are serving the original BERT weights (definition 2).
I probably won’t even assume it’s the OG BERT. It could be ModernBERT or RoBERTa or one of any number of other variants, and simply saying it’s a BERT model is usually the right level of detail for the conversation.
It depends on when. Five years ago it was quite well defined as the last one, maybe the second one in some contexts. Especially when the distinction was important, it was always the last one. In our case it was. We trained models to produce weights. We even stored models and weights separately, because models change more slowly than weights. You could choose a model and a set of weights, and run them. You could change the weights at any time.
It seems unlikely "model" was ever equivalent in meaning to "architecture". Otherwise there would be just one "CNN model" or just one "transformer model" insofar as there is a single architecture involved.
> Also, I don't understand the comments about Google being behind in agentic workflows.
It has to do with how the model is RL'd. It's not that Gemini can't be used with various agentic harnesses, like opencode or OpenClaw, or theoretically even Claude Code. It's just that the model is trained less effectively to work with those harnesses, so it produces worse results.
I have no proof, but these deep thinking modes feel to me like an orchestrator agent + sub agents, the former being RL'd to just keep going instead of being conditioned to stop ASAP.
More focus has been put on post-training recently. Where a full model training run can take a month and often requires multiple tries because it can collapse and fail, post-training is done on the order of 5 or 6 days.
My assumption is that they're all either pretty happy with their base models or unwilling to do those larger runs, and post-training is turning out good results that they release quickly.
Let's come back in 12 months and discuss your singularity then. Meanwhile, I spent about $30 on a few models as a test yesterday, and none of them could tell me why my goroutine system was failing, even though it was painfully obvious (I purposely added one too many wg.Done calls). Gemini, Codex, MiniMax 2.5: they all shat the bed on a very obvious problem, yet I'm supposed to believe they're 98% conscious and better at logic and math than 99% of the population.
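To show what I mean by painfully obvious, here's a stripped-down sketch of that class of bug (not my actual code, which was more involved):

    package main

    import "sync"

    func main() {
        var wg sync.WaitGroup

        wg.Add(1)
        go func() {
            // ... do the actual work ...
            wg.Done() // the goroutine already signals completion here
        }()

        wg.Wait()
        wg.Done() // one Done too many: the counter goes negative and the
                  // runtime panics with "sync: negative WaitGroup counter"
    }

The extra Done drives the WaitGroup counter negative, and the panic message names the problem outright.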
Every new model release, the neckbeards come out of their basements to tell us the singularity will be here in two more weeks.
Out of curiosity, did you give them a test to validate the code?
I had a test failing because I introduced a silly comparison bug (> instead of <), and Claude Opus 4.6 figured out that it wasn't the test that was the problem but the code, and fixed the bug (which I had missed).
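The shape of it was roughly this (illustrative only, in Go since that's what this thread is about; the real code was different):

    package maxof

    import "testing"

    // maxOf is meant to return the larger of the two values.
    // Bug: the comparison is flipped, so it returns the smaller one.
    func maxOf(a, b int) int {
        if a < b { // should be: if a > b
            return a
        }
        return b
    }

    // The failing test was correct all along; the code was the problem.
    func TestMaxOf(t *testing.T) {
        if got := maxOf(3, 7); got != 7 {
            t.Errorf("maxOf(3, 7) = %d, want 7", got)
        }
    }

(Function and test shown in one file for brevity.)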
There was a test and a very useful Go error that literally explained what was wrong. The models tried implementing a solution, failed, and when I pointed out the error most of them just rolled back the "solution".
What do you believe this shows? Sometimes I have difficulty finding bugs in other people's code when they do things in ways I would never use. I can rewrite their code so it works, but I can't necessarily quickly identify the specific bug.
Expecting a model to be perfect on every problem isn't reasonable. No known entity is able to do that. AIs aren't supposed to be gods.
(Well not yet anyway - there is as yet insufficient data for a meaningful answer.)
When companies claim that AI writes 90% of their code, you can expect such a system to find obvious issues. Expectations run really high when you see statements like the ones coming from the CEOs of the AI labs. When reality falls short of those expectations, reactions like this are to be expected. The pushback is proportional to the hype.
It's hard to evaluate "logic" and "math", since they're made up of many largely disparate things. But I think modern AI models are clearly better at coding, for example, than 99% of the population. If you asked 100 people at your local grocery store why your goroutine system was failing, do you think multiple of them would know the answer?
Meanwhile I've been using Kimi K2T and K2.5 to work in Go with a fair amount of concurrency, and they've been able to write concurrent Go code and debug goroutine issues equal to, and much more complex than, yours, involving race conditions and more, just fine.
(Note that org-lsp has a much improved version of the same indexer as oxen; the first was purely my design, while for the second I decided to listen to K2.5 more, and it found a bunch of potential race conditions and fixed them.)
It's basically a bunch of people who see themselves as too smart to believe in God, so instead they have just replaced it with AI and the Singularity and attribute similar things to it, e.g. eternal life, which is just heaven in religion. Amodei was hawking the doubling of the human lifespan to a bunch of boomers not too long ago. Ponce de León also went searching for the fountain of youth. It's a very common theme across human history. AI is just the new iteration onto which they project all their wishes and hopes.
The boomers he was talking to will be long underground before we have any major cures for the diseases they will die from lmao. Maybe in 200 years?
> using the current models to help develop even smarter models.
That statement is plausible. However, extrapolating that to assert all the very different things which must be true to enable any form of 'singularity' would be a profound category error. There are many ways in which your first two sentences can be entirely true, while your third sentence requires a bunch of fundamental and extraordinary things to be true for which there is currently zero evidence.
Things like LLMs improving themselves in meaningful and novel ways and then iterating that self-improvement over multiple unattended generations in exponential runaway positive feedback loops resulting in tangible, real-world utility. All the impressive and rapid achievements in LLMs to date can still be true while major elements required for Foom-ish exponential take-off are still missing.
> I don’t think it’s hyperbolic to say that we may be only a single digit number of years away from the singularity.
We're back to singularity hype, but let's be real: benchmark gains are meaningless in the real world when the primary focus has shifted to gaming the metrics
I use agentic tools daily and SOTA models have certainly improved a lot in the last year. But still in a linear, "they don't light my repo on fire as often when they get a confusing compiler error" kind of way, not a "I would now trust Opus 4.6 to respond to every work email and hands-off manage my banking and investment portfolio" kind of way.
They're still afflicted by the same fundamental problems that hold LLMs back from being a truly autonomous "drop-in human replacement" that would enable an entire new world of use cases.
And finally live up to the hype/dreams many of us couldn't help but feel were right around the corner circa 2022/23, when things really started taking off.
Yet even Anthropic has shown the downsides of using them. I don't think it is a given that improvements in model scores and capabilities plus being able to churn out code as fast as we can will lead us to a singularity; we'll need more than that.
There's about as much sense in doing this as there is in putting datacenters in orbit, i.e. it isn't impossible, but literally any other option is better.
So, yes, for the past couple of weeks it has felt that way to me. But it seems to come in fits and starts. Maybe that will stop being the case, but that's how it's felt to me for a while.