Oh, we actually fixed bugs! We fixed a few bugs in Gemma (see https://news.ycombinator.com/item?id=39671146), a gradient accumulation bug (see https://news.ycombinator.com/item?id=41859037), Phi bugs, Llama bugs and more! See https://unsloth.ai/blog/reintroducing for more details!


What does your approach with dynamic weights have to do with those bugs? All those bugs seem unrelated to the technique.


Oh apologies, I got confused - it's because when we calculate our dynamic quants, we have to do it on the fixed (bug-free) model!

In Phi 3, for example, the end-of-sentence token was wrong - if we calibrated on that, our quants would be calibrated incorrectly, since chatting with the model uses the actual correct token.
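
To make that concrete, here's a toy numpy sketch (not our actual pipeline - the token IDs, the tiny embedding, and the importance metric are all made up for illustration) of how a wrong EOS token shifts the activation statistics an imatrix-style calibration would collect:

    import numpy as np

    # Toy illustration only: calibration statistics depend on which tokens
    # the calibration text actually contains.
    rng = np.random.default_rng(0)
    VOCAB, DIM = 32_064, 64
    embedding = rng.normal(size=(VOCAB, DIM))   # stand-in for a model's embedding matrix

    WRONG_EOS, CORRECT_EOS = 32_000, 32_007     # hypothetical token ids

    def calibration_activations(eos_id, n_seqs=256, seq_len=32, seed=1):
        # Same calibration text for both runs; only the EOS id differs.
        r = np.random.default_rng(seed)
        acts = []
        for _ in range(n_seqs):
            tokens = r.integers(0, 1000, size=seq_len - 1).tolist() + [eos_id]
            acts.append(embedding[tokens])
        return np.concatenate(acts)

    def importance(acts):
        # Per-channel mean squared activation, like an imatrix-style estimate.
        return (acts ** 2).mean(axis=0)

    imp_wrong = importance(calibration_activations(WRONG_EOS))
    imp_right = importance(calibration_activations(CORRECT_EOS))
    shift = np.abs(imp_wrong - imp_right) / imp_right
    print(f"max relative shift in channel importance: {shift.max():.1%}")
    print(f"channels shifted by more than 2%: {(shift > 0.02).sum()} of {DIM}")

Channels whose estimated importance sits near a bit-allocation threshold can flip, so the quant ends up protecting weights the fixed model doesn't actually stress.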

Another is Llama 4 - https://github.com/ggml-org/llama.cpp/pull/12889, in which I fixed a RoPE issue - if we hadn't fixed that first, then again the calibration process would have been incorrect.
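
Same idea for the RoPE case - a rough sketch (the actual Llama 4 fix in that PR is more involved; the theta values and the toy RoPE layout below are just illustrative) showing that any RoPE mis-scaling changes every attention activation the calibration step would record:

    import numpy as np

    def rope(x, theta):
        # Apply a simple rotary embedding to x of shape (seq, dim);
        # toy layout, not matching any particular implementation.
        seq, dim = x.shape
        pos = np.arange(seq)[:, None]
        freqs = theta ** (-np.arange(0, dim, 2) / dim)
        ang = pos * freqs
        x1, x2 = x[:, 0::2], x[:, 1::2]
        return np.concatenate([x1 * np.cos(ang) - x2 * np.sin(ang),
                               x1 * np.sin(ang) + x2 * np.cos(ang)], axis=-1)

    rng = np.random.default_rng(0)
    q = rng.normal(size=(128, 64))

    buggy = rope(q, theta=10_000.0)    # hypothetical wrong base
    fixed = rope(q, theta=500_000.0)   # hypothetical corrected base

    # Activations differ everywhere, so importance estimated on the buggy
    # model no longer describes the fixed model we actually ship quants for.
    print("mean |activation difference|:", np.abs(buggy - fixed).mean())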


OK, so this is saying your approach doesn't work without first applying those fixes to the vanilla models. What I'm trying to understand is the approach itself. Why and how does it work?




