Yeah, source controlling the configurations and then cross-team collaboration isn't supported yet, but it's something interesting to explore, especially with engineers and PMs collaborating on prompting.
Yeah, the post doesn't go into quorum decision making with the "monitoring node", which helps enforce either eventual or strong consistency. I've found that you typically want a minimum of 3 nodes plus a monitoring node (which could also live client-side) to get a read quorum, i.e. at least 2 nodes holding consistent data.
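As a rough illustration of the quorum arithmetic (the node counts and function names here are just for the sketch, not from the post): with N replicas, read quorum R and write quorum W, you get strong consistency whenever R + W > N, because any read set then overlaps the latest write set.

```python
# Illustrative quorum math, assuming a simple majority rule.
def majority_quorum(n_nodes: int) -> int:
    """Smallest quorum size that guarantees any two quorums overlap."""
    return n_nodes // 2 + 1

def is_strongly_consistent(read_quorum: int, write_quorum: int, n_nodes: int) -> bool:
    """R + W > N means every read intersects the most recent write."""
    return read_quorum + write_quorum > n_nodes

# With 3 data nodes (monitoring node not counted), a read quorum of 2 is enough:
print(majority_quorum(3))               # 2
print(is_strongly_consistent(2, 2, 3))  # True -> reads always see the latest write
```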
This is great! Out of curiosity, what's the difference between choosing a dedicated vector database vs. a traditional database with vector indices (e.g. pgvector with Postgres)?
oh yeah this is a great question, I get this a lot when I do my talks about RAG stuff
the way I see it, if you have a small amount of data (<10,000 vectors) then it's all the same and you should stick with the technology you're most familiar with
once you get more than that, you may want to consider a vector database
the reason vector databases exist is that vector search is a highly compute-intensive task; in regular database settings you almost never have to run heavy compute, because the database is primarily looking to do an exact match
however, because vector search is predicated on finding similar vectors, and exact vector matches are unlikely, you end up having to optimize that similarity computation yourself
if you're building on a SQL/NoSQL database you find yourself having to manage indexing, distance-metric computation, and load balancing on your own
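to make the compute point concrete, here's a rough sketch (not from the article) of what hand-rolled brute-force similarity search looks like: every query has to touch every stored vector, which is exactly the work a dedicated index tries to avoid

```python
import numpy as np

# Brute-force cosine similarity: O(N * d) work per query, no index to lean on.
def top_k_cosine(query: np.ndarray, vectors: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k rows in `vectors` most similar to `query`."""
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q                   # one dot product per stored vector
    return np.argsort(-scores)[:k]   # plus an O(N log N) sort on top

# At 1M 768-dim float32 vectors, each query scans ~3 GB before you even sort;
# the demo below uses a smaller, made-up dataset so it runs quickly.
vectors = np.random.rand(10_000, 768).astype(np.float32)
print(top_k_cosine(np.random.rand(768).astype(np.float32), vectors))
```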
pgvector manages much of that for you, but due to the structure of SQL it doesn't manage it very efficiently; it wasn't built to, so an extra system has to be bolted on top
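for reference, a minimal pgvector sketch (the table name, dimensions, and connection string below are made up); the ivfflat index is the extra approximate-search layer being bolted onto Postgres, and `<->` is pgvector's L2-distance operator

```python
import psycopg2  # assumes a local Postgres with the pgvector extension available

conn = psycopg2.connect("dbname=demo")  # hypothetical connection string
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("CREATE TABLE IF NOT EXISTS items "
            "(id bigserial PRIMARY KEY, embedding vector(768));")

# The approximate-index layer pgvector adds on top of the relational engine:
cur.execute("CREATE INDEX IF NOT EXISTS items_embedding_idx ON items "
            "USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);")

# Nearest-neighbour query ordered by L2 distance to the query vector.
query_vec = "[" + ",".join(["0.1"] * 768) + "]"
cur.execute("SELECT id FROM items ORDER BY embedding <-> %s::vector LIMIT 5;",
            (query_vec,))
print(cur.fetchall())
conn.commit()
```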
as many experienced software engineers will tell you, adding complexity doesn't necessarily make something better, and it adds more points of failure
purpose-built vector databases like the ones in the article (e.g. Milvus, Chroma, Weaviate) are built with this compute challenge in mind, which becomes increasingly useful as the amount of data you have grows
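as a contrast, here's roughly what the same nearest-neighbour lookup looks like in one of those purpose-built stores (Chroma used as the example; the collection name and vectors are made up, and the client API may vary a bit between versions) - the indexing and distance computation all happen inside the engine

```python
import chromadb

# In-memory client; indexing and distance computation are handled internally.
client = chromadb.Client()
collection = client.create_collection(name="demo_docs")

collection.add(
    ids=["a", "b", "c"],
    embeddings=[[0.1, 0.2, 0.3], [0.9, 0.8, 0.7], [0.11, 0.19, 0.29]],
    documents=["doc a", "doc b", "doc c"],
)

# Nearest-neighbour query: no index tuning or distance code on our side.
results = collection.query(query_embeddings=[[0.1, 0.2, 0.3]], n_results=2)
print(results["ids"])  # expected: [['a', 'c']]
```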
I'd also add that a huge use for LLMs and vectors in the enterprise is building queries against production data. Keeping the vector DB external to your RDBMS or other production data store is a unique chance to boost performance without excess latching and other performance hits on the same database you count on for day-to-day business. They act like external, super-smart indexes.
The intuition seems reversed for Math as a Habit. Doing math problems and then tying a reward to them to reinforce the habit is one approach.
I feel like, given the broad nature of math, another approach might be to integrate math concepts and learning into things kids already enjoy doing, whether that's video games, sports, etc.
Surely within days. ComfyUI’s maintainer said he is readying the node for release, perhaps by this weekend. The Stable Cascade model is otherwise known as Würstchen v3 and has been floating around the open-source generative image space since fall.