
LLMs have been more useful on the encoder side than the decoder side in my experience. Creating embeddings is useful in all sorts of ways; in particular, if your business involves any kind of recommendation system, embeddings pay off quickly.
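
Roughly what I mean, sketched with sentence-transformers (the model name and item texts are just placeholders, not anything we actually ship):

    from sentence_transformers import SentenceTransformer
    import numpy as np

    # Embed catalog items once, then recommend by nearest neighbor.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice
    docs = [
        "wireless noise-cancelling headphones",
        "over-ear studio monitor headphones",
        "stainless steel chef's knife",
    ]
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    query_vec = model.encode(["good headphones for travel"],
                             normalize_embeddings=True)
    scores = doc_vecs @ query_vec.T        # cosine similarity (vectors are normalized)
    ranked = np.argsort(-scores.ravel())
    print([docs[i] for i in ranked])

The same doc vectors can back related-item carousels, dedup, clustering, etc., which is why the encoder side ends up touching so much of the product.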

On the decoder side, the use cases are more subtle. Rarely do you want your product to be the raw output of a statistical language model. More often the output enables other things your business is already doing. For example, you can use it to generate “doc queries”: queries under which a doc might be surfaced. These can help with cold-start issues by supplementing the doc's existing info.
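
A rough sketch of the doc-query idea using the OpenAI Python client (the model choice and prompt are illustrative, not the exact setup I'm describing):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def doc_queries(doc_text, n=5):
        """Ask the model for queries this document should be surfaced under."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{
                "role": "user",
                "content": (
                    f"List {n} short search queries a user might type if this "
                    f"document were the ideal result. One per line.\n\n{doc_text}"
                ),
            }],
        )
        lines = resp.choices[0].message.content.splitlines()
        return [q.strip("- ").strip() for q in lines if q.strip()]

    # Generated queries get indexed alongside the doc's own text, so a brand-new
    # doc has something to match against before it has any engagement data.
    print(doc_queries("A guide to repotting succulents without damaging the roots."))

The point is that the generated text never reaches the user directly; it just enriches the index so existing retrieval keeps working for new docs.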


