I have been thinking about how to solve this problem. I think one of the reasons some AI assistants shine over others is how well they reduce the amount of context the LLM needs to work with, using built-in tools. I think there's room to democratize these capabilities. One such capability is letting LLMs work directly with embeddings.
I wrote an MCP server, directory-indexer[1], for this (a self-hosted indexing MCP server). The goal is to index any directories you want your AI to know about and give it MCP tools to search through the embeddings. While an agentic grep can be valuable, when working with tons of files on similar topics (like customer cases or technical docs), pre-processed embeddings have proven valuable for me. One reason I really like it is that it democratizes my data and documents: it gives consistent results across different AI assistants, where the alternative is vastly different results depending on each coding assistant's built-in capabilities. Another is having access to your "knowledge" from any project you're on. Since this is self-hosted, I use nomic-embed-text for the embeddings, which has been sufficient for most use cases.
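To make the embeddings point concrete, here's a minimal sketch in Go of the kind of similarity search a pre-built index enables - the flat in-memory index and function names are my assumptions for illustration, not directory-indexer's actual implementation:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// Doc pairs a file path with its precomputed embedding
// (e.g. from nomic-embed-text). Hypothetical structure.
type Doc struct {
	Path      string
	Embedding []float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// search ranks indexed docs against a query embedding.
func search(query []float64, index []Doc, topK int) []Doc {
	sort.Slice(index, func(i, j int) bool {
		return cosine(query, index[i].Embedding) > cosine(query, index[j].Embedding)
	})
	if topK > len(index) {
		topK = len(index)
	}
	return index[:topK]
}

func main() {
	index := []Doc{
		{"cases/timeout.md", []float64{0.9, 0.1, 0.2}},
		{"docs/setup.md", []float64{0.1, 0.8, 0.3}},
	}
	for _, d := range search([]float64{0.85, 0.2, 0.1}, index, 1) {
		fmt.Println(d.Path)
	}
}
```

The win over grep is that a query about "timeout errors in customer cases" can surface semantically related documents even when the exact words differ.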
I got diagnosed with type 1 diabetes in Feb (technically LADA, as it's late onset). I'm the first in my family with it, so I had zero info on it. I tried getting some CGMs to use, but most don't work in Kenya as they are geo-locked, and even apps for measuring carbs like CalorieKing are not available in my region. I was really frustrated with the tech ecosystem and started working on My Sukari as a platform of free tools for diabetics.
I mostly get time to work on it on the weekends, so it's not yet ready for public use, but I've fully fleshed out one of the main features: Sugar Dashboard - a dashboard that visualises your glucose data and helps you analyse it more easily.
I'm really passionate about this and about getting as many free, practical tools into the hands of patients as possible (it honestly shouldn't be this hard to manage a disease).
I used to work for Lark. They raised $140mm to solve this problem, and the best they could do was a non-AI chatbot that whined at users for not eating enough vegetables. The Lark app has 100% user drop-off within 60 days and yet is still the silicon darling of the diabetes space.
Your platform has more science and more solutions than 100 engineers in 3 years could produce. Keep at it, and know with confidence that there is great value in what you are building. I know it's not your primary goal, but this will be lucrative if you keep going. I wish you a lot of luck; this is very cool!
All types.
The Sugar Dashboard allows importing data from different glucose apps; its goal is to let you visualize and analyze your data. I hope to integrate with CGMs directly if I can get ones that allow it, and also to source data from Health Connect. Sharing with specific people, e.g. your doctor, is also a big ask that I'm working on.
The other WIP tools will be for general health, not just diabetes, like carb counting from a photo via AI.
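For illustration, here's a minimal Go sketch of the kind of import the dashboard could do, assuming a hypothetical two-column CSV export (timestamp, mg/dL) - the format and names are placeholders, not any app's actual schema:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strconv"
	"strings"
	"time"
)

// Reading is one glucose sample; field names are illustrative.
type Reading struct {
	At   time.Time
	MgdL float64
}

// parseExport reads a hypothetical "timestamp,mg_dl" CSV export.
func parseExport(data string) ([]Reading, error) {
	rows, err := csv.NewReader(strings.NewReader(data)).ReadAll()
	if err != nil {
		return nil, err
	}
	var out []Reading
	for _, row := range rows[1:] { // skip the header row
		at, err := time.Parse(time.RFC3339, row[0])
		if err != nil {
			return nil, err
		}
		v, err := strconv.ParseFloat(row[1], 64)
		if err != nil {
			return nil, err
		}
		out = append(out, Reading{At: at, MgdL: v})
	}
	return out, nil
}

func main() {
	readings, err := parseExport("timestamp,mg_dl\n2025-01-01T08:00:00Z,112\n")
	if err != nil {
		panic(err)
	}
	fmt.Println(readings)
}
```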
Also recently diagnosed, and I just open sourced how I'm using AI to count carbs + get insulin doses [1]. The biggest issues I've seen with starting a legit business are not having sanctioned access to real-time blood sugar values (the APIs are all one hour behind) and dealing with the FDA. Love the idea of more tech-enabled diabetes management, good luck!
Love this! Thank you for sharing!
My backend is also in Go so this is a godsend. Will see how I can incorporate and let you know if I do!
> not having sanctioned access to real-time blood sugar values (the APIs are all one hour behind)
Ah, I didn't know this. One of the prospective tools I had in mind was real-time alerting in case of drastic drops, e.g. pinging a doctor or relative. I think it will have to be limited to the apps/tools that do support real-time.
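For what it's worth, the drop-detection side of such an alert could be as simple as this Go sketch - the thresholds and function names are illustrative placeholders, not clinical guidance:

```go
package main

import "fmt"

// shouldAlert flags a drastic drop: a reading below the low
// threshold, or one falling faster than maxDropRate. Both
// thresholds are illustrative placeholders only.
func shouldAlert(prev, curr, minutes float64) bool {
	const low = 70.0        // hypoglycemia threshold (mg/dL)
	const maxDropRate = 3.0 // mg/dL per minute
	if curr < low {
		return true
	}
	return (prev-curr)/minutes > maxDropRate
}

func main() {
	if shouldAlert(150, 110, 10) { // fell 40 mg/dL in 10 minutes
		fmt.Println("alert: notify doctor/relative")
	}
}
```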
Technically there is unsanctioned access (someone reverse engineered the real-time APIs [1] which I ported to Go). I think the FDA does not want easy access to real-time values so that folks can't easily recommend insulin dosing without oversight. I am personally of the opinion that it is our right to have programmatic access to the real-time data and do with it what we please.
Would love to get in touch to hear more about your long-term vision for the project!
Insulin is lethal at higher dosages, so there is definitely an argument. My counter would be that someone who has to self-administer this drug 5+ times a day should have the right to make determinations about dosing.
Yes, I came across xDrip+ when looking for an Android app I could use with the Libre 2. I don't think Dexcoms are sold in Kenya, and even the Libres around are UK ones, so you need 1) a VPN for setup, and 2) an iPhone - both a challenge for most people. I had to buy my first ever iPhone for this. Anyway, I found xDrip+ a bit of a challenge to set up and a bit too technical to suggest to others; it needs sideloading and manually disabling a lot of Android defaults.
I had a lot of success with Juggluco[1], which is available on the Play Store and provides easy-to-use APIs for interacting with supported CGM readings. Juggluco has an inbuilt xDrip web server, but I haven't tried it yet.
Thanks!
I started out with a Next.js full stack on Vercel, with the DB on Turso, but ended up with a React frontend (Next.js on Vercel) and a Go backend (self-hosted on a VPS).
I decided to port the backend to Go + Postgres (on a Hetzner VPS) and retain the frontend on Next.js - a lighter-weight client, with most of the compute moved to the backend API.
A few reasons for the port: I've had a lot more success/stability with Go backends; Turso pulled multi-tenant DBs, which is what I mostly wanted them for; and Next.js is getting too hard for me.
- The Go backend is just the std lib (a 1.22+ server with the nice routing; see the sketch after this list) - I mostly write all the lines in this myself.
- The frontend is textbook modern React: React 19, Next 15, Tailwind 4.
- AI mostly writes the code in the frontend (Cursor + Cline + sequentialthinking + context7 + my own custom "memory bank" process of breaking down tasks). AI is really, really good at this. I wrote https://image-assets.etelej.com/ in literally 2 days, 2 weekends ago, with less than 10% of the code being mine (mostly infra + Hono APIs).
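For anyone who hasn't seen it, the Go 1.22+ std lib routing mentioned above looks roughly like this (the routes and handlers are illustrative, not My Sukari's actual API):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Since Go 1.22, ServeMux patterns support HTTP methods
	// and path wildcards - no router dependency needed.
	mux.HandleFunc("GET /api/readings/{id}", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "reading %s\n", r.PathValue("id"))
	})
	mux.HandleFunc("POST /api/readings", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusCreated)
	})

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```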
> the only info we do store is your session's cookies (and only if you're logged in).
If this truly is the only cookie you store, then you may not need the cookie banner; you can explain the cookie usage in your Privacy Policy instead.
Under GDPR[1], strictly necessary cookies, like the login cookie you describe, do not require consent to be obtained, as long as their usage is explained, e.g. in your privacy policy.
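For illustration, a strictly necessary session cookie might be set like this in Go - the names and values are placeholders:

```go
package main

import (
	"log"
	"net/http"
)

// loginHandler sets only a session cookie - the "strictly
// necessary" category that doesn't require a consent banner,
// provided the privacy policy explains it.
func loginHandler(w http.ResponseWriter, r *http.Request) {
	http.SetCookie(w, &http.Cookie{
		Name:     "session_id",
		Value:    "opaque-server-side-token", // placeholder
		Path:     "/",
		HttpOnly: true, // not readable from JS
		Secure:   true, // sent over HTTPS only
		SameSite: http.SameSiteLaxMode,
	})
	w.Write([]byte("logged in"))
}

func main() {
	http.HandleFunc("/login", loginHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```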
This brings back so many fond memories. I grew up in a rural part of Kenya where the internet was scarce and tech practically non-existent. I was interested in web dev and taught myself PHP using HTTrack to download the PHP manual site, then the cprogramming.com website. I remember copying these site contents into a thick notebook to read at school. Cprogramming.com, imho, was my programming foundation, as I treated it as programming gospel. That kid back then would be shocked at how far I've come - now a dev at MS. Not sure how I came across HTTrack back then, but I am so glad I did.
Logging in shows a dashboard asking you to implement it on your site.
That's not what I expect from a "demo" of a captcha.
A recommendation: please use mcaptcha on the mcaptcha account sign-up page, at the very least. It'd then provide an instant demo of the user UX, which is one of the major pain points of captchas.
Or at least provide a link to the widget[1] for a UX demo, where inspecting network calls also shows the API calls in action.
A tip for going over GitHub code: use the github.dev domain (e.g. open github.dev/user/repo instead of github.com/user/repo). It lets you browse the code in an IDE, with language features and easier navigation (e.g. Go To/Peek Definition, or References). It also saves you from cloning a large repo just to do the same locally.
In case you're trying to open the RFC links on the site: the IETF site no longer supports plain HTTP and does not redirect to HTTPS, hence the 404 error you see. You can manually open the https:// versions of the URLs to visit them.
That's right. "Ki" in this context is a prefix denoting a language, or the language spoken by a people ("ki ya lugha" - the "ki" of language). This is exactly like the language suffixes English has: -ish, -ic, -ese, -ian - for example English, Spanish, Arabic, Japanese, Vietnamese. "Ki" serves this role universally in Swahili: for example Kiswahili (the language of the Swahili, the people of the coast), Kiingereza (English), and Kirusi (Russian); and if you don't know the correct name for the language spoken in a place/country, it's valid to say Kiukraine (the language spoken by the people of Ukraine).
As a Kenyan, I found this read quite delightful; it contains a lot I didn't know about the history of a language we speak.
> Nyerere personally translated two of William Shakespeare’s plays into Swahili to demonstrate the capacity of Swahili to bear the expressive weight of great literary works.
This is great! Can't wait to give it a try. Waiting for the Windows version though.
Does the Electron GUI need to be open for it to fetch the data, or can it do that as a background service as well - for example, if one just wanted an always-up-to-date persistory.db to use directly? Or, say, an API server with a custom browser extension consuming it - are these also being considered for BrowserParrot?
Heh, ah yes, Windows. It will be a bigger lift than Linux, but it's definitely something we'd like to support (we both have Windows boxes too, so there's a bit of skin in the game).
Regarding the GUI, good question. The GUI does _not_ need to be open. There's a background process that periodically copies new history from the various sqlite databases into the BrowserParrot db.
In Activity Monitor this process appears as "core." I know, terribly ambiguous name. It was originally named for the Clojure namespace where that bit of code lives, but renaming the process to make it more clear what's running is on our backlog.
I feel it's not by accident that you don't understand it. The documentation says it's heavily inspired by TailwindCSS and calls itself utility-first. However, if you've used TailwindCSS or most other utility-first frameworks, you'll find it's not the same (which is OK, but this never gets clarified).
Pollen uses CSS variables as the utilities in "utility-first", not classes. So to better understand the docs, look at how the CSS variables are used.
Utility-first as used by most other frameworks (like Tailwind) is based on Atomic CSS. The concept, as "atomic" implies, is using a single irreducible unit per class (one class, one style). This site[1] provides curated reading resources on the topic. While it's a newish concept and still being defined, using classes as the utilities is the more commonplace meaning, so if you're writing a framework that doesn't do that, I feel you should make that clear.
This means the Introducing Pollen page is confusing by design: it implies utility-first CSS, but the first example shows a Bootstrap-like `.button`. Most other examples on the site also show single-style classes, further misleading readers into thinking the classes are the "atomic" part.
In summary, Pollen is a framework that uses CSS variables[2] as the utilities. I like what it's doing and am watching to see it grow. To better understand the value it provides, I'd recommend this video[1] on CSS variables, along with viewing the framework as a rapid-prototyping foundation built on the same concepts.
[1] https://github.com/peteretelej/directory-indexer