Hacker News

> I don’t know. After the model has been created (trained), I’m pretty sure that generating embeddings is much less computationally intensive than generating text.

An embedding is generated after a single pass through the model, so functionally it's the equivalent of generating a single token from a text generation model.
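To make the cost difference concrete, here is a toy sketch (pure NumPy, with a hypothetical stand-in `forward` function rather than any real model): embedding a sequence costs one forward pass, while autoregressively generating N tokens costs N passes.

```python
import numpy as np

calls = {"forward": 0}

def forward(tokens):
    """Toy stand-in for one transformer forward pass over a token sequence."""
    calls["forward"] += 1
    rng = np.random.default_rng(len(tokens))
    return rng.standard_normal(8)  # pretend hidden state / logits

def embed(tokens):
    # Embedding: a single pass, then use the resulting hidden state.
    return forward(tokens)

def generate(tokens, n_new):
    # Autoregressive generation: one pass per new token.
    out = list(tokens)
    for _ in range(n_new):
        hidden = forward(out)
        out.append(int(hidden.argmax()))  # greedy pick of a toy "token"
    return out

calls["forward"] = 0
embed([1, 2, 3])
embed_passes = calls["forward"]   # 1 pass for the whole sequence

calls["forward"] = 0
generate([1, 2, 3], n_new=10)
gen_passes = calls["forward"]     # 10 passes, one per generated token
```

(Real inference engines amortize the per-token passes with a KV cache, but the one-pass-per-token shape of generation is the point here.)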

I might be wrong, but aren't embedding models usually bidirectional rather than causal, so the attention mechanism itself is more expensive?

It depends on the architecture (you can very well convert a decoder-only causal model into an embedding model, e.g. Qwen/Mistral), but it's true that traditional embedding models such as BERT-based ones are bidirectional, although it's unclear how much more compute that inherently requires.

Compare with ModernBERT, which uses more modern techniques and is still bidirectional, yet is very, very speedy: https://huggingface.co/blog/modernbert
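The causal-vs-bidirectional distinction comes down to the attention mask. A minimal NumPy sketch (illustrative only, not any real model's implementation): a causal mask zeroes out attention to future positions, while a bidirectional model lets every position attend to every token.

```python
import numpy as np

def attention_weights(q, k, causal):
    """Scaled dot-product attention weights, with an optional causal mask."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    if causal:
        # Each position may attend only to itself and earlier positions.
        future = np.triu(np.ones(scores.shape, dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)
    # Row-wise softmax.
    scores = scores - scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))  # 4 tokens, hidden dim 8

causal_w = attention_weights(x, x, causal=True)   # upper triangle is zero
bidir_w = attention_weights(x, x, causal=False)   # all entries nonzero
```

Both masks produce the same size attention matrix, which is one reason it's not obvious that bidirectionality inherently costs much more per pass; the practical cost difference for generation comes from causal models being able to cache past keys/values.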


yes exactly

Will update the post to capture that interesting tidbit, thanks

And also, thanks for this post! https://minimaxir.com/2025/02/embeddings-parquet/



