
Is it just me, or is the rate of model releases accelerating to an absurd degree? Today we have Gemini 3 Deep Think and GPT 5.3 Codex Spark. Yesterday we had GLM5 and MiniMax M2.5. Five days before that we had Opus 4.6 and GPT 5.3. Then maybe two weeks before that we had Kimi K2.5.



I think it is because of Chinese New Year. The Chinese labs like to publish their models around Chinese New Year, and the US labs do not want to let a DeepSeek R1 (20 January 2025) impact event happen again, so I guess they publish models that are more capable than what they imagine the Chinese labs can yet produce.

Singularity or just Chinese New Year?

The Singularity will occur on a Tuesday, during Chinese New Year

I guess. DeepSeek V3 was released on Boxing Day, a month prior.

https://api-docs.deepseek.com/news/news1226


And it made almost zero impact; it was just a bigger version of DeepSeek V2 and went mostly unnoticed because its performance wasn't particularly notable, especially for its size.

It was R1, with its RL training, that made the news and crashed the stock market.


Aren't we saying "lunar new year" now?

I don't think so; there are different lunar calendars.

In fact, many Asian countries use lunisolar calendars, which basically follow the moon for the months but add an extra month every few years so the seasons don't drift.

As these calendars also rely on time zones for date calculation, there are rare occasions where the New Year start date differs by an entire month between 2 countries.


If that's the sole problem, it should be called "Chinese-Japanese-Korean-whatever-else New Year" instead. Maybe "East Asian New Year" for short. (Not that there are absolutely no discrepancies among them, but they are similar enough that New Year's Day almost always coincides.)

It's not Japanese either.

This non-problem sounds like it's on the same scale as "The British Isles", a term which is mildly annoying to Irish people but in common use everywhere else.


[flagged]


For another example, Singapore, one of the "many Asian countries" you mentioned, lists "Chinese New Year" as the official name on government websites. [0] Also note that neither California nor New York is located in Asia.

And don't get me started with "Lunar New Year? What Lunar New Year? Islamic Lunar New Year? Jewish Lunar New Year? CHINESE Lunar New Year?".

[0] https://www.mom.gov.sg/employment-practices/public-holidays


“Lunar New Year” is vague when referring to the holiday as observed by Chinese labs in China. Chinese people don’t call it Lunar New Year or Chinese New Year anyways. They call it Spring Festival (春节).

As it turns out, people in China don’t name their holidays based off of what the laws of New York or California say.


Please don't, because "Lunar New Year" is ambiguous. Many other Asian cultures also have traditional lunar calendars but a different New Year's Day. It's a bit presumptuous to claim that this is the sole "Lunar New Year" celebration.

https://en.wikipedia.org/wiki/Indian_New_Year%27s_days#Calen...

https://en.wikipedia.org/wiki/Islamic_New_Year

https://en.wikipedia.org/wiki/Nowruz


I didn't expect language policing to have reached such a level. This is specifically about China and DeepSeek, which celebrates Chinese New Year. Do you demand that all Chinese people say happy lunar new year to each other?

"Happy Holidays" comes to the diaspora

Happy Lunar Holidays to you!

"Lunar New Year" is perhaps over-general, since there are non-Asian lunar calendars, such as the Hebrew and Islamic calendars.

That said, "Lunar New Year" is probably as good a compromise as any, since we have other names for the Hebrew and Islamic New Years.


There's more than one Asian lunar calendar: https://news.ycombinator.com/item?id=46996396.

The Islamic calendar originated in Arabia. Calling it an Asian lunar calendar wouldn't be inaccurate.


This all seems like a plot to get everyone worshipping the Roman goddess Luna.

But they're Chinese companies specifically, in this case

Where do all of those Asian countries have that tradition from?

Have you ever had a Polish Sausage? Did it make you Polish?


I'm having trouble just keeping track of all these different types of models.

Is "Gemini 3 Deep Think" even technically a model? From what I've gathered, it is built on top of Gemini 3 Pro, and appears to be adding specific thinking capabilities, more akin to adding subagents than a truly new foundational model like Opus 4.6.

Also, I don't understand the comments about Google being behind in agentic workflows. I know that the typical use of, say, Claude Code feels agentic, but also a lot of folks are using separate agent harnesses like OpenClaw anyway. You could just as easily plug Gemini 3 Pro into OpenClaw as you can Opus, right?

Can someone help me understand these distinctions? Very confused, especially regarding the agent terminology. Much appreciated!


The term “model” is one of those super overloaded terms. Depending on the conversation it can mean:

- a product (most accurate here imo)

- a specific set of weights in a neural net

- a general architecture or family of architectures (BERT models)

So while you could argue this is a “model” in the broadest sense of the term, it’s probably more descriptive to call it a product. Similarly we call LLMs “language” models even if they can do a lot more than that, for example draw images.


I'm pretty sure only the second is properly called a model, and "BERT models" are simply models with the BERT architecture.

If someone says something is a BERT “model” I’m not going to assume they are serving the original BERT weights (definition 2).

I probably won’t even assume it’s the OG BERT. It could be ModernBERT or RoBERTa or one of any number of other variants, and simply saying it’s a BERT model is usually the right level of detail for the conversation.


It depends on the time. Five years ago it was quite well defined that it's the last one, maybe the second one in some contexts. Especially when the distinction was important, it was always the last one. In our case it was: we trained models to have weights. We even stored models and weights separately, because models change more slowly than weights. You could choose a model and a set of weights and run them. You could change the weights at any time.

Then marketing and a huge amount of capital came along.
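
To make that split concrete, here's a minimal sketch in Go (all type and function names are made up, not from any real framework): the "model" is the architecture definition, and the weights are a separate artifact you can swap at will.

    package main

    import "fmt"

    // ModelConfig is the "model" in the architectural sense: layer layout,
    // activation, etc. It changes rarely.
    type ModelConfig struct {
        LayerSizes []int  // e.g. [784, 128, 10]
        Activation string // e.g. "relu"
    }

    // Weights are the trained parameters, stored and versioned separately
    // from the architecture.
    type Weights map[string][]float32

    // Run would apply a set of weights to the architecture; the actual
    // forward pass is stubbed out here.
    func Run(cfg ModelConfig, w Weights, input []float32) []float32 {
        return input // real inference omitted
    }

    func main() {
        cfg := ModelConfig{LayerSizes: []int{784, 128, 10}, Activation: "relu"}
        v1 := Weights{"layer0": {0.1, 0.2}}
        v2 := Weights{"layer0": {0.3, 0.4}}

        // Same model, two interchangeable sets of weights.
        fmt.Println(Run(cfg, v1, []float32{1, 2, 3}))
        fmt.Println(Run(cfg, v2, []float32{1, 2, 3}))
    }

Under that split, "new model" unambiguously meant a new config, new weights, or both.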


It seems unlikely "model" was ever equivalent in meaning to "architecture". Otherwise there would be just one "CNN model" or just one "transformer model", insofar as there is a single architecture involved.

First of all, hyperparameters. Second, the organization, or connections. Third, the cost function. Fourth, the activation function. Fifth, the type of learning. Etc.

These are not weights. These were parts of models.


> Also, I don't understand the comments about Google being behind in agentic workflows.

It has to do with how the model is RL'd. It's not that Gemini can't be used with various agentic harnesses, like open code or open claw or theoretically even claude code. It's just that the model is trained less effectively to work with those harnesses, so it produces worse results.


There are hints that this is a preview of Gemini 3.1.

I have no proof, but these deep thinking modes feel to me like an orchestrator agent + subagents, the former being RL'd to just keep going instead of being conditioned to stop ASAP.

More focus has been put on post-training recently. Where a full model training run can take a month and often requires multiple tries because it can collapse and fail, post-training is done on the order of 5 or 6 days.

My assumption is that they're all either pretty happy with their base models or unwilling to do those larger runs, and post-training is turning out good results that they release quickly.


They are using the current models to help develop even smarter models. Each generation of model can help even more for the next generation.

I don’t think it’s hyperbolic to say that we may be only a single digit number of years away from the singularity.


I must be holding these things wrong, because I'm not seeing any of these god-like superpowers everyone seems to enjoy.

Who said they’re godlike today?

And yes, you are probably using them wrong if you don’t find them useful or don’t see the rapid improvement.


Let's come back in 12 months and discuss your singularity then. Meanwhile, I spent like $30 on a few models as a test yesterday, and none of them could tell me why my goroutine system was failing, even though it was painfully obvious (I purposefully added one too many wg.Done calls). Gemini, Codex, MiniMax 2.5: they all shat the bed on a very obvious problem, but I am to believe they're 98% conscious and better at logic and math than 99% of the population.
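
For reference, a minimal sketch of that class of bug (not the actual code in question; the worker function here is made up). One extra wg.Done per Add eventually drives the WaitGroup counter negative:

    package main

    import (
        "fmt"
        "sync"
    )

    // worker signals completion on the WaitGroup. Done is already deferred,
    // so the extra explicit call decrements the counter twice for one Add.
    func worker(id int, wg *sync.WaitGroup) {
        defer wg.Done()
        fmt.Printf("worker %d finished\n", id)
        wg.Done() // BUG: one Done too many
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(1)
        go worker(1, &wg)

        // The extra Done drives the counter below zero and the runtime
        // panics with "sync: negative WaitGroup counter" (in a tiny example
        // like this the panic can race with the program exiting).
        wg.Wait()
    }

The panic message spells the problem out, which is what makes the failure to spot it so jarring.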

Every new model release, neckbeards come out of their basements to tell us the singularity will be here in two more weeks.


On the flip side, twice I put about 800K tokens of code into Gemini and asked it to find why my code was misbehaving, and it found it.

The logic related to the bug wasn't all contained in one file, but across several files.

This was Gemini 2.5 Pro. A whole generation old.


Out of curiosity, did you give a test for them to validate the code?

I had a test failing because I introduced a silly comparison bug (> instead of <), and Claude Opus 4.6 figured out the problem wasn't the test but the code, and fixed the bug (which I had missed).


There was a test and a very useful golang error that literally explained what was wrong. The models tried implementing a solution and failed, and when I pointed out the error, most of them just rolled back the "solution".

What exact models were you using? And with what settings? 4.6 / 5.3 codex both with thinking / high modes?

minimax 2.5, kimi k2.5, codex 5.2, gemini 3 flash and pro, glm 4.7, devstral2 123b, etc.

Ok, thanks for the info

You are fighting straw men here. Any further discussion would be pointless.

Of course, n-1 wasn't good enough but n+1 will be the singularity; just two more weeks my dudes, two more weeks... rinse and repeat ad infinitum.

Like I said, pointless strawmanning.

You’ve once again made up a claim of “two more weeks” to argue against even though it’s not something anybody here has claimed.

If you feel the need to make an argument against claims that exist only in your head, maybe you can also keep the argument only in your head too?



Mind sharing the file?

Also, did you use Codex 5.3 Xhigh through the Codex CLI or Codex App?


I think you're being awfully generous to the average human.

Consider that a nonzero percent of otherwise competent adults can't write in their native language.

Consider that some tens of percent of people wouldn't have the foggiest idea of how to calculate a square root, let alone a cube root.

Consider that well less than half of the population has ever seen code let alone produced functioning code.

The average adult is strikingly incapable of things that the average commenter here would consider basic skills.


> I purposefully added one too many wg.Done

What do you believe this shows? Sometimes I have difficulty finding bugs in other people's code when they do things in ways I would never use. I can rewrite their code so it works, but I can't necessarily quickly identify the specific bug.

Expecting a model to be perfect on every problem isn't reasonable. No known entity is able to do that. AIs aren't supposed to be gods.

(Well not yet anyway - there is as yet insufficient data for a meaningful answer.)


When companies claim that AI writes 90% of their code, you can expect such a system to find obvious issues. Expectations are really high when you see statements like the ones coming from the CEOs of the AI labs. When those expectations fall short, such reactions are to be expected. It's the same proportionality on both sides.

Post the file here

It's hard to evaluate "logic" and "math", since they're made up of many largely disparate things. But I think modern AI models are clearly better at coding, for example, than 99% of the population. If you asked 100 people at your local grocery store why your goroutine system was failing, do you think multiple of them would know the answer?

Meanwhile I've been using Kimi K2T and K2.5 to work in Go with a fair amount of concurrency, and they've been able to write concurrent Go code and debug goroutine issues equal to, and much more complex than, your issue, involving race conditions and more, just fine.

Projects:

https://github.com/alexispurslane/oxen

https://github.com/alexispurslane/org-lsp

(Note that org-lsp has a much improved version of the same indexer as oxen; the first was purely my design, while for the second I decided to listen to K2.5 more, and it found a bunch of potential race conditions and fixed them.)

shrug


It's basically a bunch of people who see themselves as too smart to believe in God; instead they have just replaced it with AI and the Singularity and attribute similar things to it, e.g. eternal life, which is just heaven in religion. Amodei was hawking a doubling of the human lifespan to a bunch of boomers not too long ago. Ponce de León also went searching for the fountain of youth. It's a very common theme across human history. AI is just the new iteration onto which they project all their wishes and hopes.

You realize that science and technology do in fact produce medical breakthroughs that cure disease, right?

On the other hand, prayer doesn’t heal anybody and there’s no proof of supernatural beings.


The boomers he was talking to will be long underground before we have any major cures for the diseases they will die from, lmao. Maybe in 200 years?

Btw, so will you and I most likely.


> using the current models to help develop even smarter models.

That statement is plausible. However, extrapolating that to assert all the very different things which must be true to enable any form of 'singularity' would be a profound category error. There are many ways in which your first two sentences can be entirely true, while your third sentence requires a bunch of fundamental and extraordinary things to be true for which there is currently zero evidence.

Things like LLMs improving themselves in meaningful and novel ways and then iterating that self-improvement over multiple unattended generations in exponential runaway positive feedback loops resulting in tangible, real-world utility. All the impressive and rapid achievements in LLMs to date can still be true while major elements required for Foom-ish exponential take-off are still missing.


> I don’t think it’s hyperbolic to say that we may be only a single digit number of years away from the singularity.

We're back to singularity hype, but let's be real: benchmark gains are meaningless in the real world when the primary focus has shifted to gaming the metrics


Ok, here I am living in the real world finding these models have advanced incredibly over the past year for coding.

Benchmaxxing exists, but that’s not the only data point. It’s pretty clear that models are improving quickly in many domains in real world usage.


I use agentic tools daily and SOTA models have certainly improved a lot in the last year. But still in a linear, "they don't light my repo on fire as often when they get a confusing compiler error" kind of way, not a "I would now trust Opus 4.6 to respond to every work email and hands-off manage my banking and investment portfolio" kind of way.

They're still afflicted by the same fundamental problems that hold LLMs back from being a truly autonomous "drop-in human replacement" that would enable an entire new world of use cases.

And finally live up to the hype and dreams many of us couldn't help feeling were right around the corner circa 2022/23, when things really started taking off.


Yet even Anthropic has shown the downsides of using them. I don't think it is a given that improvements in model scores and capabilities, plus being able to churn out code as fast as we can, will lead us to a singularity; we'll need more than that.

I agree completely. I think we're in alignment with Elon Musk who says that AI will bypass coding entirely and create the binary directly.

It's going to be an exciting year.


There’s about as much sense doing this as there is in putting datacenters in orbit, i.e. it isn’t impossible, but literally any other option is better.

So, yes, for the past couple of weeks it has felt that way to me. But it seems to come in fits and starts. Maybe that will stop being the case, but that's how it's felt to me for a while.

Anthropic took the day off to do a $30B raise at a $380B valuation.

The most ridiculous valuation in the history of markets. Can't wait to watch these companies crash and burn when people give up on the slot machine.

As usual don't take financial advice from HN folks!

not as if you could get in on it even if you wanted to

WeWork almost IPO'd at $50bn. It was also a nice crash and burn.

Why? They had a $10+ billion ARR run rate in 2025, tripled from 2024. I mean, 30x is a lot, but also not insane at that growth rate, right?

It's a 13-day-old account with the handle IHateAI.

Fast takeoff.

There's more compute now than before.

They are spending literal trillions. It may even accelerate

It's because of a chain of events.

Next week is Chinese New Year -> Chinese labs release all their models at once before it starts -> US labs respond with what they have already prepared.

Also note that even in US labs a large proportion of researchers and engineers are Chinese, and many celebrate Chinese New Year too.

TL;DR: Chinese New Year. Happy Year of the Horse, everybody!



