
> ran whatever version Ollama downloaded on a 3070ti (laptop version). It's reasonably fast.

Probably was not R1, but one of the other models that was distilled from R1, which apparently can still be quite good.



Ollama has been deliberately misrepresenting R1 distill models as "R1" for marketing purposes. A lot of "AI" influencers on social media are unabashedly doing the same. Ollama's default "R1" model is a 4-bit RTN quantized 7B model, which is nowhere close to the real R1 (a 671B parameter fp8 MoE).

https://www.reddit.com/r/LocalLLaMA/comments/1i8ifxd/ollama_...
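The scale gap is easy to see with back-of-envelope arithmetic (rough numbers only; real model files add overhead for embeddings, the KV cache, and per-block quantization scales):

```python
# Back-of-envelope memory footprint: why the default "R1" on Ollama
# cannot be the real thing on consumer hardware.
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

distill_7b_q4 = approx_size_gb(7e9, 4)    # ~3.5 GB, fits a laptop 3070 Ti
real_r1_fp8 = approx_size_gb(671e9, 8)    # ~671 GB, needs a multi-GPU server
```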


Ollama is pretty clear about it; it's not like they are trying to deceive. You can also download the 671B model with Ollama, if you like.


No, they are not. They intentionally removed every reference to this not being R1 from the CLI, and changed the names from the ones both DeepSeek and Hugging Face used.


Yet I did not see a single issue filed on the GitHub repository, so I just made one myself (https://github.com/ollama/ollama/issues/8698).


They used short strings for the names, which is very different from deception.

https://ollama.com/search

> DeepSeek's first-generation of reasoning models with comparable performance to OpenAI-o1, including six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

Well, I guess if you are in the Enterprise Java naming mindset you would expect something like "VisitorModelUtilsListGetterAdapterInterceptorMessageManagerDrivenObserverPool"

If you look at their API docs you will see:

    model: name of the model to push in the form of <namespace>/<model>:<tag>
I don't think there is any reason to jump to the conclusion that there is some kind of conspiracy here; it's just naming things based on an API that probably wasn't designed with distillation in mind.
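A quick sketch of how that `<namespace>/<model>:<tag>` scheme decomposes (the `library/deepseek-r1:7b` reference below is a made-up illustration; the point is that the distill-vs-full distinction only has room to live in the tag):

```python
# Hypothetical model reference in the <namespace>/<model>:<tag> form
# described in the API docs.
ref = "library/deepseek-r1:7b"
namespace, rest = ref.split("/", 1)   # "library"
model, tag = rest.split(":", 1)       # "deepseek-r1", "7b"

# Under this scheme, deepseek-r1:7b (a Qwen distill) and
# deepseek-r1:671b (the full model) share the same <model> name,
# so nothing but the tag signals that one is a distill.
```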


Yeah, they're so clear in fact that they call the distilled models "R1" in the url and everywhere on the page[1], instead of using the "DeepSeek-R1-Distill-" prefix, as DeepSeek themselves do[2].

[1]: https://ollama.com/library/deepseek-r1

[2]: https://github.com/deepseek-ai/DeepSeek-R1#deepseek-r1-disti...


I mean... yes. The DeepSeek announcement puts R1 right there in the name for those models. https://api-docs.deepseek.com/news/news250120

It's fairly clear that R1-Llama or R1-Qwen is a distill, and they're all coming directly from DeepSeek.

As an aside, at least the larger distilled models (I'm mostly running r1-llama-distill-70b) are definitely not the same thing as the base Llama/Qwen models. I'm getting better results locally, admittedly with slower inference, since it does the whole "<think>" section.

Surprisingly, the content in the <think> section is actually quite useful on its own. If you're using the model to spitball or brainstorm, getting to see it do that process is just flat-out useful, sometimes more so than the actual answer it finally produces.


I'm not too hip to all the LLM terminology, so maybe someone can make sense of this and see if it's r1 or something based on r1:

    >>> /show info
      Model
        architecture        qwen2
        parameters          7.6B
        context length      131072
        embedding length    3584
        quantization        Q4_K_M


Hi Kye, I tried a version of this model to assess its capabilities.

I would recommend you to try to run the llama-based distill (same size, same quantization) that you can find here: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Llama-8...

It should take the same amount of memory as the one you currently have.

In my experience the Llama version performs much better at adhering to the prompt, understanding data in multiple languages, and going in-depth in its responses.


So... it's not R1 itself.

It's a model called Qwen, trained by Alibaba, into which the DeepSeek team has "distilled" knowledge from their own (~100x bigger) model.

Think of it as forcing a junior Qwen to listen in while the smarter, PhD-level model was asked thousands of tough problems. It will acquire some of that knowledge and learn a lot of the reasoning process.

It cannot become exactly as smart, for the same reason a dog can learn lots of tricks from a human but not become human-level itself: it doesn't have enough neurons/capacity. Here, Qwen is a 7B model so it can't cram within 7 billion parameters as much data as you can cram into 671 billion. It can literally only learn 1% as much, BUT the distillation process is cleverly built and allows to focus on the "right" 1%.
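Mechanically, that "listening in" can be sketched as classic logit-matching distillation (a toy sketch: DeepSeek's R1 distills were reportedly fine-tuned on R1-generated reasoning traces, i.e. sequence-level distillation, but the intuition is the same — train the student to match the teacher's softened output distribution):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's softened distribution and
    # the student's: minimized when the student matches the teacher.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum()

teacher = np.array([2.0, 0.0, -2.0])
loss_matched = kd_loss(teacher, teacher)                    # lowest possible
loss_wrong = kd_loss(np.array([-2.0, 0.0, 2.0]), teacher)   # much higher
```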

Then this now-smarter Qwen is quantized. This means that we take its parameters (16-bit floats, super precise numbers) and truncate them to make them use less memory space. This also makes it less precise. Think of it as taking a super high resolution movie picture and compressing it into a small GIF. You lose some information, but the gist of it is preserved.
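In its simplest round-to-nearest (RTN) form, that truncation is just this (a minimal sketch; the actual Q4_K_M format in llama.cpp quantizes blockwise with extra scale/offset tricks):

```python
import numpy as np

def quantize_rtn_4bit(w: np.ndarray):
    # Map each weight to one of 16 signed levels (-8..7); use +-7 so the
    # largest-magnitude weight round-trips exactly.
    scale = np.abs(w).max() / 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.31, -0.07, 0.52, -0.44], dtype=np.float32)
q, s = quantize_rtn_4bit(w)
w_hat = dequantize(q, s)   # close to w, but off by up to half a step
```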

As a result of both of these transformations, you get something that can run on your local machine — but is a bit dumber than the original — because it's about 400 times smaller than the real deal.


"Qwen2.5 is the large language model series developed by Qwen team, Alibaba Cloud."

And I think they, the DeepSeek team, fine-tuned Qwen 7B on DeepSeek-R1 outputs. That is how I understood it.

Which apparently makes it quite good for a 7B model. But, again, if I understood correctly, it is still just Qwen, without the full reasoning ability of DeepSeek-R1.


In my application, code generation, the distilled DeepSeek models (7B to 70B) perform poorly. They imitate the reasoning of the r1 model, but their conclusions are not correct.

The real R1 model is great, better than o1, but the distilled models are not even as good as the base models they were distilled from.


It's a distill; it's going to be much, much worse than R1.



