hah i use llms for that now too - "option-space 'link to <foo> lang'" and chat returns faster than the whole endeavor of opening google or putting stuff into the nav bar.
That’s my experience too. I don’t find Google or even Kagi faster for retrieving a link. All of the major LLMs can pull search results faster than I can through search websites, and then I’ve got a conversation where I can dive deeper if I want.
not obvious to me why this is an improvement over having the agent just update the rule files directly itself; i have it do that with my various AI-targeted readme files and it works great.
Graphiti MCP can recall more than just preferences and coding styles. Application specifications, and how they evolve over time, can be stored as well. For any non-trivial application, config files would likely be a misfit for this use case.
i store those in my rules files - really all the knowledge i would pass on to another engineer (incl. AI). not sure i follow why you would avoid putting that into AI-readable files in your repo, like i do now.
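for concreteness, something like this (the file name and contents are just illustrative - use whichever file your tools read, e.g. CLAUDE.md or .cursorrules):

```
# CLAUDE.md (hypothetical example)

## Architecture
- API lives in services/api; background jobs in services/worker.
- All money values are integer cents, never floats.

## Conventions
- Prefer table-driven tests; run `make test` before committing.
```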
excellent post. i think the lesson is a good one: it's better to have fewer bugs than more bugs, and for some users, it would still have had an annoying bug.
Many ways to skin a cat, at least one of this size (33k items). And at the given size, standing up a database would have no advantages. Which I believe is the main point of the post! If you have a simple problem, use a simple solution.
If one had 1M items instead, the situation would be completely different.
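To put rough numbers on it, a minimal sketch of both ends of the scale (items.json and its fields are hypothetical, and the thresholds are ballpark):

```python
import json
import sqlite3

# At 33k items, a plain in-memory list is plenty: load once, scan on demand.
# (items.json is a hypothetical file of {"name": ..., "category": ...} records.)
with open("items.json") as f:
    items = json.load(f)

# A linear scan over 33k dicts runs on the order of a millisecond in CPython,
# so there is nothing worth indexing.
matches = [it for it in items if it["category"] == "books"]

# At ~1M items (or with concurrent writers), an embedded database starts to
# pay for itself: SQLite gives indexed lookups without running a server.
conn = sqlite3.connect("items.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, category TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_cat ON items (category)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [(it["name"], it["category"]) for it in items],
)
conn.commit()
rows = conn.execute(
    "SELECT name FROM items WHERE category = ?", ("books",)
).fetchall()
conn.close()
```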