When it becomes more feasible to run inference on the client (browser or desktop), I can see SLMs showing up more often in production.