People complaining about quality here are missing the point: this is an ONNX-compatible inference engine written in Rust, and it just uses a 5MB SqueezeNet model from 2016 for simplicity.
The question is: is it worth investing time and effort into ONNX?
Missing the point?
When the classifications are horribly bad, what is the point?
I can write a random phrase generator in FAR less than 5MB that would have the same overall accuracy as this.
No. The inference accuracy of the image classifier depends on the model used, and this is a demo of the code executing the model in a browser with GPU acceleration, not of the model itself. You can plug and play any model in the ONNX format, e.g. from https://github.com/onnx/models. By comparison, complaining about the "abysmal quality" of the dummy model on display here is like saying Blender is bad 3D modeling software because the first thing it shows you when you open it is a blank cube.
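To make the plug-and-play point concrete, here is a minimal sketch using onnxruntime-web (a stand-in for the demo's own Rust engine, whose API isn't shown in this thread) that loads a different classifier from the ONNX model zoo. The model file name and the input name "data" with shape [1, 3, 224, 224] are assumptions based on the zoo's image classifiers:

    import * as ort from "onnxruntime-web";

    async function main() {
      // Swap this URL for any classifier from https://github.com/onnx/models
      // (the file name here is hypothetical).
      const session = await ort.InferenceSession.create("./resnet50-v2-7.onnx");

      // Dummy preprocessed image: 1 x 3 x 224 x 224 normalized floats.
      const input = new ort.Tensor(
        "float32",
        new Float32Array(1 * 3 * 224 * 224),
        [1, 3, 224, 224],
      );

      // "data" is the input name the model zoo's image classifiers typically
      // use; check session.inputNames for the model you actually load.
      const results = await session.run({ data: input });
      const scores = results[session.outputNames[0]].data as Float32Array;
      console.log("argmax class:", scores.indexOf(Math.max(...scores)));
    }

    main();

Nothing about the preprocessing or postprocessing changes when you swap the model file, which is the whole appeal of a common interchange format.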
https://en.wikipedia.org/wiki/SqueezeNet
https://github.com/onnx/models?tab=readme-ov-file#image-clas...
Here is the same model using TensorFlow.js:
https://hpssjellis.github.io/beginner-tensorflowjs-examples-...
https://t-shaped.nl/posts/running-ai-models-in-the-browser-u...
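For contrast with those TensorFlow.js links, a rough sketch of the same browser flow with @tensorflow/tfjs; the model.json URL is a placeholder and assumes the network has already been converted to the TF.js graph-model format:

    import * as tf from "@tensorflow/tfjs";

    // Placeholder URL: assumes a SqueezeNet-style classifier already converted
    // to the TensorFlow.js graph-model format (model.json + weight shards).
    const MODEL_URL = "./squeezenet_tfjs/model.json";

    async function classify(img: HTMLImageElement): Promise<number> {
      const model = await tf.loadGraphModel(MODEL_URL);

      // Same preprocessing idea as the ONNX path: resize to 224x224,
      // scale pixels to [0, 1], add a batch dimension.
      const input = tf.tidy(() =>
        tf.image
          .resizeBilinear(tf.browser.fromPixels(img), [224, 224])
          .toFloat()
          .div(255)
          .expandDims(0),
      );

      const logits = model.predict(input) as tf.Tensor;
      const classId = (await tf.argMax(logits, -1).data())[0];
      tf.dispose([input, logits]);
      return classId;
    }

The per-model ceremony (conversion step, input names, preprocessing constants) is roughly the same either way; the ONNX route just skips the conversion when the model is already published in that format.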