Moondream 2 has been very useful for me: I've been using it to automatically label object detection datasets for novel classes, then distill an orders-of-magnitude smaller but similarly accurate CNN from those labels.
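For anyone curious, the labeling step is roughly this (a minimal sketch - detect is the object detection method the moondream2 HF checkpoint exposes via trust_remote_code, if I recall the signature correctly; the file name and class are made up):

    from PIL import Image
    from transformers import AutoModelForCausalLM

    # Pull boxes for a novel class out of Moondream 2, then use them as
    # training targets for a small CNN detector.
    model = AutoModelForCausalLM.from_pretrained(
        "vikhyatk/moondream2", revision="2025-01-09", trust_remote_code=True
    )

    image = Image.open("frame_0001.jpg")      # made-up input frame
    result = model.detect(image, "forklift")  # made-up novel class
    for box in result["objects"]:             # normalized x_min/y_min/x_max/y_max
        print(box)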
One oddity is that I haven't seen the claimed improvements beyond the 2025-01-09 tag - subsequent releases improve recall but degrade precision pretty significantly. It'd be amazing if object detection VLMs like this reported class confidences to better address this issue. That said, having a dedicated object detection API is very nice and absent from other models/wrappers AFAIK.
Looking forward to Moondream 3 post-inference optimizations. Congrats to the team. The founder Vik is a great follow on X if that's your thing.
Really impressive performance from the Moondream model, but looking at the results from the big 3 labs, it's absolutely wild how poorly Claude and OpenAI perform. Gemini isn't as good as Moondream, but it's clearly the only one that's even halfway decent at these vision tasks. I didn't realize how big a performance gap there was.
Funnily enough, Gemini is also the only one able to read a D20. ChatGPT consistently gets it wrong, and Claude mostly argues it can't read the face of the die that's facing up because it's obstructed (it's not lol).
I'm not sure why they haven't been acquired yet by any of the big ones, since clearly Moondream is pretty good! Definitely seems like something Anthropic/OpenAI/whoever would want to fold into their platforms and such. Everyone involved in creating it should probably be swimming in money, and with the reach of the big orgs, visual use cases for LLMs should become far less useless.
Using moondream2 at paper.design to describe user uploaded images (for automatic labels in the layer tree). It's incredible, super fast and accurate. Excited to try out 3 :)
The 'point' skill is trained on a ton of UI data; we've heard of a lot of people using it in combination with a bigger driver model for UI automation. We are also planning on post-training it to work end-to-end for this in an agentic setting before the final release -- this was one of the main reasons we increased the model's context length.
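Roughly, that pairing looks like this (a hedged sketch using moondream2's point method from the HF checkpoint; click_at is a stub standing in for whatever automation layer you actually use):

    from PIL import Image
    from transformers import AutoModelForCausalLM

    def click_at(px, py):  # stub for a real automation layer (pyautogui, CDP, ...)
        print(f"click at ({px:.0f}, {py:.0f})")

    model = AutoModelForCausalLM.from_pretrained(
        "vikhyatk/moondream2", trust_remote_code=True
    )

    screenshot = Image.open("screenshot.png")
    target = "the search box"  # in practice, chosen by the bigger driver model
    points = model.point(screenshot, target)["points"]
    x, y = points[0]["x"], points[0]["y"]  # normalized to [0, 1]
    click_at(x * screenshot.width, y * screenshot.height)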
Re: chart understanding, there are a lot of different types of charts out there, but it does fairly well! We posted ChartQA benchmarks in the blog: it's on par with GPT5* and slightly better than Gemini 2.5 Flash.
* To be fair to GPT5, it's going to work well on many more types of charts/graphs than Moondream. To be fair to Moondream, GPT5 isn't really well suited to deploy in a lot of vision AI applications due to cost/latency.
This looks amazing. I'm a big fan of Gemini for bounding box operations, the idea that a 9B model could outperform it is incredibly exciting!
I noticed that Moondream 2 was Apache 2 licensed but the 3 preview is currently BSL ("You can’t (without a deal): offer the model’s functionality to anyone outside your organization—e.g., an external API, or managed hosting for customers") - is that a permanent change to your licensing policies?
Spent 5 minutes trying to get basic pricing info for Moondream cloud. Seems it simply doesn't exist (or at least not until you've actually signed up?). There are 5,000 free requests, but I need to sense-check that the pricing is viable as step 0 of evaluating - long before hooking it up to an app.
We are looking to launch our cloud very soon. We are still optimizing our inference to get you the best pricing we can offer. Follow @moondreamai on X if you want to keep your ear to the ground for our launch!
The MoE architecture choice here is particularly interesting - the ability to keep only 2B parameters active while maintaining 8B model performance is a game-changer for edge deployment. I've been deploying vision models in production environments where latency is critical, and this sparse activation approach could solve the inference cost problem that's been limiting adoption of larger VLMs. The chart understanding capabilities mentioned look promising for automated document analysis workflows. Has anyone tested the model's consistency across different image qualities or lighting conditions? That's often where smaller models struggle compared to frontier ones.
My understanding is that, while all 9B parameters are loaded into memory, each token inference step selects and uses only 2B of them - so tokens are produced faster because less computation is needed.
Hoping someone will correct me if that's not the right mental model!
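In case it helps, here's a toy top-2 router in PyTorch that matches that mental model (illustrative only - the expert count and sizes are made up, not Moondream's actual config):

    import torch
    import torch.nn.functional as F

    # Toy MoE layer: 8 experts, top-2 routing. All expert weights sit in
    # memory, but each token only runs through 2 of the 8 experts.
    n_experts, top_k, d_model, d_ff = 8, 2, 64, 256
    experts = torch.nn.ModuleList([
        torch.nn.Sequential(
            torch.nn.Linear(d_model, d_ff),
            torch.nn.GELU(),
            torch.nn.Linear(d_ff, d_model),
        )
        for _ in range(n_experts)
    ])
    router = torch.nn.Linear(d_model, n_experts)

    def moe_forward(x):  # x: one token's hidden state, shape (d_model,)
        weights, idx = torch.topk(F.softmax(router(x), dim=-1), top_k)
        # Only the selected experts compute; the other 6 stay idle this step.
        return sum(w * experts[i](x) for w, i in zip(weights, idx.tolist()))

    print(moe_forward(torch.randn(d_model)).shape)  # torch.Size([64])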
Could you clarify whether the 2B active parameters are selected per token during inference, and how this scales with context length? Specifically, how does MoE affect activation during inference, and what are the practical implications for latency?
Would be interesting to see how it scores on object detection for COCO or the Objects365 dataset (even though I know it will be slower than a dedicated object detection model).
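If someone wanted to run that, the standard pycocotools flow is something like this (a sketch - assumes you've already dumped predictions to COCO results format as preds.json):

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Score dumped detections against COCO ground truth. Assumes preds.json
    # holds [{"image_id", "category_id", "bbox": [x, y, w, h], "score"}, ...].
    coco_gt = COCO("annotations/instances_val2017.json")
    coco_dt = coco_gt.loadRes("preds.json")
    ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
    ev.evaluate()
    ev.accumulate()
    ev.summarize()  # prints AP @ IoU=0.50:0.95 etc.

One wrinkle: COCOeval ranks detections by score, and as noted upthread Moondream doesn't report confidences, so you'd have to assign a constant score - which would handicap its mAP relative to detectors with calibrated scores.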
Since there's no quantized version available at the moment, you'll need ~20 GB of memory for the weights, plus some extra for the KV cache. A CPU with 32 GB of RAM will be the cheapest option and still reasonably fast, given the relatively small number of active parameters.
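The ~20 GB figure follows from simple arithmetic (assuming ~9B parameters at bf16, per the model size mentioned elsewhere in this thread):

    # Back-of-envelope weight memory for a ~9B-parameter model.
    params = 9e9
    for dtype, bytes_per_param in {"fp32": 4, "bf16": 2, "int8": 1, "int4": 0.5}.items():
        print(f"{dtype}: {params * bytes_per_param / 1e9:.1f} GB")
    # fp32: 36.0 GB, bf16: 18.0 GB, int8: 9.0 GB, int4: 4.5 GB
    # bf16 lines up with the ~20 GB figure once KV cache overhead is added.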
I don't even know what a "quantized version" is, but I was expecting answers about NVIDIA graphics cards and their memory. My computer has 24GB of memory, but I'll go for 64GB to run this locally on a new computer.
Its ability to process large volumes of images with few active parameters makes it a significant advancement for edge devices. However, scaling these models to production environments often introduces security challenges, including bot floods targeting inference APIs and adversarial inputs that mimic legitimate queries to disrupt detection.
It's honestly really good. Big fan of that team - they're practical and have been producing genuinely useful software while sharing all their learnings online.