
That summary isn't wrong; it's just reductionist.

Fine-tuning makes sense when you need behavioral shifts (style, tone, bias) or when you're training on data that won't be available at runtime.

RAG excels when you want factual augmentation without retraining the whole damn brain.

It's not either/or — it's about cost, latency, use case, and update cycles. But hey, binaries are easier to pitch on a slide.
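To make the contrast concrete, here's a toy sketch of the RAG side: relevant text is fetched at query time and prepended to the prompt, so the model's weights never change. Everything here (the word-overlap scorer standing in for embedding search, the corpus, the prompt template) is illustrative, not any particular framework's API.

```python
# Toy RAG-style flow: retrieve a fact at query time and prepend it
# to the prompt, instead of baking facts into weights via fine-tuning.

def overlap_score(query: str, doc: str) -> int:
    """Count shared words between query and doc (a crude stand-in
    for a real embedding-similarity search)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus document with the highest overlap score."""
    return max(corpus, key=lambda doc: overlap_score(query, doc))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the prompt with retrieved context; no retraining involved."""
    context = retrieve(query, corpus)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Python 3.12 removed distutils from the standard library.",
]
print(build_prompt("How tall is the Eiffel Tower?", corpus))
```

Updating the "knowledge" here is just editing the corpus, which is exactly the update-cycle advantage; the trade-off is extra retrieval latency on every call, which fine-tuning avoids.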


