
People complaining about quality here are missing the point: this is an ONNX-compatible inference engine written in Rust, it just uses the 5MB SqueezeNet from 2016 for simplicity.

The question is, is it worth investing time and effort into ONNX?

https://en.wikipedia.org/wiki/SqueezeNet

https://github.com/onnx/models?tab=readme-ov-file#image-clas...

Here is the same model using TensorFlow.js:

https://hpssjellis.github.io/beginner-tensorflowjs-examples-...

https://t-shaped.nl/posts/running-ai-models-in-the-browser-u...



Seriously, HN is feeling more like a YouTube comments section lately. I don't know what happened.


Missing the point? When the classifications are horribly bad, what is the point? I can write a random phrase generator in FAR less than 5MB that would have the same overall accuracy as this.


The point you're missing is that this is just a demo for Wonnx, an inference runtime for ONNX models.

You can plug your own model into it, it's a general-purpose inference runtime that runs in the browser.
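To make that concrete, swapping in a model is roughly a one-line change. A minimal sketch, assuming the `wonnx` crate's async `Session` API; the exact type names, tensor conversions, and input names depend on the model and crate version:

```rust
use std::collections::HashMap;

// Rough sketch of running inference with wonnx; sessions are created and
// run asynchronously (e.g. under pollster or tokio).
async fn classify(image: &[f32]) {
    // Swap in any ONNX model here, e.g. one from https://github.com/onnx/models.
    let session = wonnx::Session::from_path("data/models/opt-squeeze.onnx")
        .await
        .expect("failed to load model");

    // The input name ("data" for SqueezeNet) comes from the model itself
    // and varies between models, as does the expected tensor shape
    // (1x3x224x224 floats for SqueezeNet).
    let mut inputs = HashMap::new();
    inputs.insert("data".to_string(), image.into());

    // Runs on the GPU via wgpu; the output is the raw class-score tensor,
    // so postprocessing (softmax, top-k) is up to the caller.
    let outputs = session.run(&inputs).await.expect("inference failed");
    println!("outputs: {:?}", outputs.keys());
}
```

Nothing here is specific to SqueezeNet: point `from_path` at a different `.onnx` file, match the input name and shape, and the same runtime executes it.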


The point is you can swap in any ONNX model instead of the 5MB shitty one.


>"this is an ONNX-compatible inference engine written in Rust"

Ah, the fact that it is written in Holy Rust instantly absolves the abysmal quality.


No, the inference accuracy of the image classifier depends on the model used, and this is a demo of the code executing the model in a browser with GPU acceleration, not of the model itself. You can plug and play any model in the ONNX format, e.g. https://github.com/onnx/models. Complaining about the "abysmal quality" of the dummy model on display here is like saying Blender is bad 3D modeling software after opening it for the first time because all it models is a blank cube.


No, the point is that it's ONNX and can be pointed at any ONNX model. Wow, people have really gotten dumb on HN lately.



