
GLM-4.5 is not a coding model


It may not be code-only, but it was trained extensively for coding:

> Our base model undergoes several training stages. During pre-training, the model is first trained on 15T tokens of a general pre-training corpus, followed by 7T tokens of a code & reasoning corpus. After pre-training, we introduce additional stages to further enhance the model's performance on key downstream domains.

From my notes here: https://simonwillison.net/2025/Jul/28/glm-45/
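
For a rough sense of what that staged curriculum means in practice, here's a minimal sketch. Only the 15T general / 7T code-and-reasoning token budgets come from the report; every class, data source, and mixture ratio below is hypothetical:

    # Hypothetical sketch of a staged pre-training curriculum.
    # Only the 15T/7T token budgets come from the GLM-4.5 report;
    # the names and ratios are illustrative, not from the paper.
    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        token_budget: int           # tokens to consume before moving on
        mixture: dict[str, float]   # data source -> sampling weight

    SCHEDULE = [
        Stage("general", 15 * 10**12,
              {"web": 0.7, "books": 0.2, "multilingual": 0.1}),
        Stage("code_and_reasoning", 7 * 10**12,
              {"code": 0.6, "math": 0.25, "reasoning_traces": 0.15}),
    ]

    def pretrain(model, sampler):
        # Run each stage to its token budget, switching the data
        # mixture between stages rather than sampling one static corpus.
        for stage in SCHEDULE:
            consumed = 0
            while consumed < stage.token_budget:
                batch = sampler.sample(stage.mixture)  # weighted draw
                model.train_step(batch)
                consumed += batch.num_tokens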


Yes, all current reasoning models are, but it's not a dedicated coding model like DeepSeek Coder or Qwen Coder.


I don't see how the training process for GLM-4.5 is materially different from that used for Qwen3-235B-A22B-Instruct-2507 - they both did a ton of extra reinforcement learning training related to code.

Am I missing something?


I think the primary thing you're missing is that Qwen3-235B-A22B-Instruct-2507 != Qwen3-Coder-480B-A35B-Instruct. Both get tons of code RL, but the Coder variant's post-training pipeline focuses entirely on code: it doesn't monitor performance on anything else for forgetting/regression, and it isn't meant for other tasks.
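
To make the distinction concrete, here's a minimal sketch of the kind of regression check a generalist pipeline might run during code RL. Every name here is hypothetical, not from either model card:

    # Hypothetical sketch: code RL with regression monitoring on
    # held-out non-code evals, as a generalist pipeline might do.
    # A code-only pipeline would drop the general evals entirely.
    GENERAL_EVALS = ["general_knowledge", "chat_heldout", "math_heldout"]
    REGRESSION_TOLERANCE = 0.01   # max allowed score drop vs. baseline
    EVAL_INTERVAL = 100           # RL steps between eval sweeps

    def rl_post_train(model, code_env, evaluate, baseline, num_steps):
        for step in range(num_steps):
            rollouts = code_env.collect(model)   # code tasks only
            model.rl_update(rollouts)            # e.g. a PPO/GRPO step
            if step % EVAL_INTERVAL == 0:
                for task in GENERAL_EVALS:
                    score = evaluate(model, task)
                    if baseline[task] - score > REGRESSION_TOLERANCE:
                        # React to forgetting: reweight the data mix,
                        # blend in general SFT, or roll back a checkpoint.
                        model.handle_regression(task)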



