
Doesn't match how things are run in production these days. As a vendor, you need to target the customer's environment as closely as possible. Even if it's theoretically feasible to serve off a single machine, you should have a cloud-native setup ready to go.

In principle you could totally run this on a single bare-metal node, but most will not be doing that in practice.



> you should have a cloud-native setup ready to go

why is storing the index as a FAISS/LanceDB on-disk vector store not "cloud native"? I am running this setup in production across dozens of nodes. We migrated all of our infrastructure off Pinecone to this solution and have seen a 10x drop in latency, and the cost improvements have been dramatic (from paid to totally free).

I have a bit of an axe to grind in the vector DB space: it feels like the industry has gaslit developers over the last year or so into thinking SaaS is necessary for vector retrieval, when low-latency on-disk KNN across vectors is a solved problem.
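To illustrate the "solved problem" claim: here's a minimal sketch of on-disk KNN using nothing but NumPy and a memory-mapped vector file. This is not FAISS or LanceDB code, just a hypothetical brute-force setup (names like `build_store` and `knn` are made up for illustration), but it shows why a server isn't strictly required for retrieval:

```python
import numpy as np

def build_store(path, vectors):
    """Persist float32 vectors to disk as a .npy file."""
    np.save(path, np.asarray(vectors, dtype=np.float32))

def knn(path, query, k=5):
    """Memory-map the store and return indices of the k nearest
    neighbours by L2 distance. The OS page cache keeps hot pages
    in RAM, so repeated queries stay fast without a server."""
    store = np.load(path, mmap_mode="r")   # no full load into RAM
    q = np.asarray(query, dtype=np.float32)
    d = np.linalg.norm(store - q, axis=1)  # L2 distance to each row
    return np.argsort(d)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vecs = rng.standard_normal((1000, 64)).astype(np.float32)
    build_store("vectors.npy", vecs)
    idx = knn("vectors.npy", vecs[42], k=3)
    print(idx[0])  # nearest neighbour of vecs[42] is itself: 42
```

Brute force like this scales linearly, of course; the real libraries add ANN indexes (IVF, HNSW) on top, but the storage model is the same: a local file, no network hop.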


I totally agree that the latency of this solution leaves a lot of room for improvement. But that's beside the point of the article, which is that people can get no-cost semantic search for their personal website using those services. They can also use other solutions, of course.

Also, I'm experimenting with further integration to reduce latency and will most likely publish another article within the month. Stay tuned.

Finally, I somewhat agree that many of the players in the vector DB space push their cloud offerings. Which is fine; how else should they make money? And if latency matters that much to you, Qdrant offers custom deployments, too. I believe running Qdrant locally will handily beat your LanceDB solution performance-wise unless you're talking about fewer than 100k entries. We have both Docker containers and release binaries for all major OSes, so why not give it a try?


That's fantastic! But not all organizations (arguably, most) run their tech/infrastructure that well and competently. For a lot of organizations, it makes sense to externalize anything that isn't a core competency directly related to their business. For them, less infra and less code is "better". Depending on how the accounting is done, it might also be preferable to have a "vendor" expense rather than an "internal team" expense, which requires staffing.

All that is to say: maybe there's a lot of money in the SaaS/big-cloud space, and customers willing to run their own setup that requires tuning might not be willing to hand vendors large sums of money? Just theorizing here!

Oh also "cloud native" is like a marketing term vaguely saying "you can hook this into other cloud stuff" and it works with K8s/whatever cloud thingy.



