
Yeah I saw that, but that could cover a pretty wide range and it's not clear to me whether that relies on preloading a model.



> At inference time Magika uses Onnx as an inference engine to ensure files are identified in a matter of milliseconds, almost as fast as a non-AI tool even on CPU.
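For context on the "preloading a model" question: an ONNX-based classifier like Magika typically loads the model into an in-memory inference session once, and each per-file identification is then a single cheap forward pass. Below is a minimal sketch of that pattern; the `model.onnx` path and the byte-padding feature step are assumptions for illustration, not Magika's actual internals. Only the onnxruntime calls are standard API.

  import numpy as np
  import onnxruntime as ort

  # Creating the session is the "preload" step: model weights are read
  # from disk once and kept in memory for all later identifications.
  session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
  input_name = session.get_inputs()[0].name

  def identify(file_bytes: bytes) -> np.ndarray:
      # Hypothetical feature extraction: pad/truncate the raw bytes to a
      # fixed-size integer vector (Magika's real features are more involved).
      features = np.zeros((1, 512), dtype=np.int64)
      sample = file_bytes[:512]
      features[0, : len(sample)] = np.frombuffer(sample, dtype=np.uint8)
      # Per-file inference is a single forward pass on CPU.
      return session.run(None, {input_name: features})[0]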



