Sure. I will add them on GitHub later today. The repo is currently in a very messy state. I would like to clean it up and provide detailed assembly steps, but I have too much work currently. Hopefully I can do this in a couple of months.
I was looking for Astral’s future plans to make money. Simonw already answered in another post [1] tldr - keep tooling open and free forever, build enterprise services (like a private package registry) on top.
Good thing to highlight. I'm not sure I'd bet on the game plan, but uv is an incredibly useful tool which I also wouldn't have bet on. Hopefully Simonw is right, and Astral can maintain as is.
Well, that's basically the core of Anaconda, and it's working for them.
That said, I've checked Anaconda's site, and while it used to be "Anaconda [Python] Commercial Distribution", "On-Prem repositories", "Cloud notebooks and training"... over the last year they've changed their product name to "Anaconda AI Platform", and now it's all about "The operating system for AI", "Tools for the Complete AI Lifecycle". Eeeeh, no thanks.
not sure i hold out much long term hope for them either. both of these companies can eventually make money in a way that isn't shady - just not enough money to satisfy their VCs.
I’ve hit similar problems with their Ruby gRPC library.
The counterexample is the language Go. The team running Go has put considerable care and attention into making the project welcoming for developers to contribute to, while still adhering to Google's code contribution requirements. Building from source is straightforward, and IIRC it's one of the easier cross compilers to set up.
I've been thinking about other algorithms as well, like range-based. We can definitely override the algorithm for hot shards. Ideally, we match what Postgres does with partitions: range, hash, and list. That way, we can shard both inside Postgres (e.g. with postgres_fdw) and at the proxy.
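To make the three strategies concrete, here is a minimal illustrative sketch (not the proxy's actual code, and the function names are hypothetical) of how a key could be routed to a shard under hash, range, and list partitioning, mirroring what Postgres supports for table partitions:

```python
def hash_shard(key, n_shards):
    # Hash sharding: a stable hash modulo the shard count.
    return hash(key) % n_shards

def range_shard(key, boundaries):
    # Range sharding: `boundaries` is a sorted list of exclusive upper
    # bounds; the key lands on the first range it falls under, or on
    # the final catch-all shard otherwise.
    for shard, upper in enumerate(boundaries):
        if key < upper:
            return shard
    return len(boundaries)

def list_shard(key, shard_value_sets):
    # List sharding: each shard owns an explicit set of values.
    for shard, values in enumerate(shard_value_sets):
        if key in values:
            return shard
    raise KeyError(f"no shard owns {key!r}")
```

Overriding the algorithm for a hot shard then amounts to swapping which of these functions (or which boundary/value configuration) applies to a given table.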
I think the first step is to add as much monitoring as possible to catch the problem early. There is also the dry-run mode [1] that you can try before sharding at all to make sure your traffic would be evenly split given a choice of sharding key.
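The idea behind a dry run can be sketched roughly like this (an illustrative stand-alone example, not the tool's actual implementation): replay a sample of your keys through the sharding function and report the fraction of traffic each shard would receive, without moving any data.

```python
from collections import Counter

def split_report(keys, n_shards):
    # For a candidate sharding key, count what fraction of the sampled
    # rows/queries each shard would receive.
    counts = Counter(hash(k) % n_shards for k in keys)
    total = len(keys)
    return {shard: counts.get(shard, 0) / total for shard in range(n_shards)}
```

A heavily skewed report (one shard taking most of the traffic) is a signal to pick a different sharding key before committing to the split.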
I love a good pairing ladder. While there is no absolute measure of productivity, a suite of thoughtful observations can at least provide tripwires.
One major problem with tools that ask crafters to do data entry to show their value is that the best people are the most likely to refuse. You really need to focus on tooling that captures what's going on without toil.
For example, if your folks are remote and they use software to aid in pairing (e.g. Tuple), script the system to log the Tuple sessions and perhaps even capture the pairing in the commits made together.
This can be used as an input to bring visibility to the best leaders in your org.
So often, these tools lack usability because they're built generically for any use case. Here, it was designed specifically for your wife's bakery.
The process to build a site/app like this will only get easier and more defect-free over the coming months.