That is why a minimal framework[1] that lets me understand the core immutable loop while quickly experimenting with all these imperative concepts is invaluable.
I was able to try Beads[1] quickly with my framework and decided I like it enough to keep it. If I don't like it, just drop it, they're composable.
I see, thanks for channeling the GP! Yeah, like you say, I just don't think getting the tool call template right is really a problem anymore, at least with the big-labs SotA models that most of us use for coding agents. Claude Sonnet, Gemini, GPT-5 and friends have been heavily, heavily RL-ed into being really good at tool calls, and it's all built into the providers' APIs now, so you never even see the magic where the tool call is parsed out of the raw response. To be honest, when I first read about tool calls with LLMs I thought, "that'll never work reliably, it'll mess up the syntax sometimes." But in practice, it does work. (Or, to be more precise, if the LLM ever does mess up the grammar, you never know, because it's able to seamlessly retry and correct without it ever being visible at the user-facing API layer.) Claude Code plugged into Sonnet (or even Haiku) might do hundreds of tool calls in an hour of work without missing a beat. One of the many surprises of the last few years.
> Call it what you want, you can write it in 100 lines of Python. I encourage every programmer I talk to who is remotely curious about LLMs to try that. It is a lightbulb moment.
Definitely want to try this out. Any resources / etc. on getting started?
Yes, it's a mess, and there will be a lot of churn, you're not wrong, but there are foundational concepts underneath it all that you can learn and then it's easy to fit insert-new-feature into your mental model. (Or you can just ignore the new features, and roll your own tools. Some people here do that with a lot of success.)
The foundational mental model to get the hang of is really just:
* An LLM
* ...called in a loop
* ...maintaining a history of stuff it's done in the session (the "context")
* ...with access to tool calls to do things. Like, read files, write files, call bash, etc.
Some people call this "the agentic loop." Call it what you want, you can write it in 100 lines of Python. I encourage every programmer I talk to who is remotely curious about LLMs to try that. It is a lightbulb moment.
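The loop above really is that small. Here is one minimal sketch of it — `call_llm` is a stand-in for whichever provider chat API you use (the assumed return shape, a dict with `content` and an optional `tool_call`, is an illustration, not any vendor's actual schema):

```python
import subprocess
from pathlib import Path

# Three example tools. A real agent adds permission checks, sandboxing, etc.
def read_file(path):
    return Path(path).read_text()

def write_file(path, content):
    Path(path).write_text(content)
    return f"wrote {len(content)} chars to {path}"

def run_bash(cmd):
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=30)
    return proc.stdout + proc.stderr

TOOLS = {"read_file": read_file, "write_file": write_file, "bash": run_bash}

def agent_loop(call_llm, user_message, max_turns=20):
    # The "context": the full history of messages and tool results so far.
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        # Assumed contract: call_llm(history) returns
        # {"content": str, "tool_call": {"name": ..., "args": {...}} or None}
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply["content"],
                        "tool_call": reply.get("tool_call")})
        tool_call = reply.get("tool_call")
        if tool_call is None:  # no tool requested: the model is done
            return reply["content"], history
        # Run the requested tool and feed the result back into the context.
        result = TOOLS[tool_call["name"]](**tool_call["args"])
        history.append({"role": "tool", "content": result})
    raise RuntimeError("agent exceeded max_turns")
```

Swap in a real `call_llm` backed by your provider of choice and you have the skeleton of a coding agent; everything else is tooling and polish.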
Once you've written your own basic agent, if a new tool comes along, you can easily demystify it by thinking about how you'd implement it yourself. For example, Claude Skills are really just:
1) A bunch of files with instructions for the LLM in them.
2) Search for the available "skills" on startup and put all the short descriptions into the context so the LLM knows about them.
3) Also tell the LLM how to "use" a skill. Claude just uses the `bash` tool for that.
4) When Claude wants to use a skill, it uses the "call bash" tool to read in the skill files, then does the thing described in them.
and that's more or less it, glossing over a lot of things that are important but not foundational like ensuring granular tool permissions, etc.
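Steps 2 and 3 above can be sketched in a few lines. The layout here (one directory per skill containing a `SKILL.md` whose first line is a short description) is a simplifying assumption for illustration — the real format has more structure — but the shape of the idea is the same:

```python
from pathlib import Path

def load_skill_index(skills_dir):
    """Step 2: scan for available skills and collect each short description."""
    lines = []
    for skill_file in sorted(Path(skills_dir).glob("*/SKILL.md")):
        # Assumption for this sketch: first line of SKILL.md is the summary.
        description = skill_file.read_text().splitlines()[0].strip()
        lines.append(f"- {skill_file.parent.name}: {description} (instructions: {skill_file})")
    return "\n".join(lines)

def build_system_prompt(skills_dir):
    """Step 3: tell the LLM what skills exist and how to 'use' one."""
    return (
        "You have skills available. To use one, read its SKILL.md file with the "
        "bash tool (e.g. `cat <path>`) and follow the instructions in it.\n"
        "Available skills:\n" + load_skill_index(skills_dir)
    )
```

Note there is no special "skill" machinery at runtime: the prompt just points the model at files, and the existing bash tool does the rest.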
The second-to-last post[0] talks about how they decided to migrate their stack from Ruby on Rails to Haskell, and are now in the seventh (!) year of that migration.
Creator of JustSketchMe here! I was very surprised to see this on HackerNews this morning. Very cool to see this doing the rounds 6 years into running this :)
If you liked that video you'll like this one too, which explains that mechanical and electrical parallel but in the other direction.
Prof. Malcolm C. Smith had an electrical circuit and made its mechanical equivalent. His invention (the inerter) gave McLaren's F1 team an advantage in 2007.
This is something that I've wondered about when it comes to things like self driving cars, and the difference between good and bad drivers.
When I'm driving I'm constantly making predictions about the future state of the highway and acting on that. For example before most people change lanes, even without using a signal they'll look and slightly move the car in that direction, up to a full second before they actually do it. Or I see two cars that are going to end up in a conflict state (trying to take the same location on the highway) so I pivot away from them and the recovery they will have to make.
Self-driving cars, for all I know, are purely reactive. They can't pick up on these cues beforehand at this time and preemptively put themselves in a safer position. Bad/distracted/unaware drivers are not only reactive, they'll have a much slower reaction time than a self-driving car.
Zed looks and feels amazing to use. I test-drove it for a bit on my Linux system, and the feel of it is difficult to convey to those who haven't tried it yet. It's easy to overlook the significance of a GPU-accelerated editor, but I promise you: use it for a bit and you'll be sold.
The only feature that is preventing me from switching to Zed is the current lack of DevContainer support[1]. After investing a significant amount of time perfecting a devcontainer (custom fat image with all tools/libs + config), it would be a large step backwards to go back to installing everything locally.
There's a lot of eyes on this feature, so I'm hopeful it will be implemented in the future.
I explain it to my peers as "exploiting Cunningham's Law[0] with thyself"
I'll stare at a blank screen/file for hours seeking inspiration, but the moment I have something to criticise I am immediately productive and can focus.
Tip: I very often use AI for inspiration. In this case, I ended up keeping a lot (not all) of the UI code it made, but I will very often prompt an agent, throw away everything it did, and redo it myself (manually!). I find the "zero to one" stage of creation very difficult and time consuming and AI is excellent at being my muse.
This right here is the single biggest win for coding agents. I see and directionally agree with all the concerns people have about maintainability and sprawl in AI-mediated projects. I don't care, though, because the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races. It's getting to that golden moment that constitutes 80% of what's costly about programming for me.
This is the part where I simply don't understand the objections people have to coding agents. It seems so self-evidently valuable --- even if you do nothing else with an agent, even if you literally throw all the code away.
I want to make a few points to clarify some of the choices and why I made them. This thread is very helpful and I appreciate all the comments, as they highlight how some things are clear in our own heads but never get shared with anyone reading. So:
1. I looked at AdGuardHome but I preferred PiHole because I found its documentation a bit more helpful for my purpose (the Unbound sample, the Wireguard setup, etc)
2. I saw the docker compose package, but I wanted something that runs at the OS level. There are docker packages for Wireguard too and I had also a look at Mistborn (https://gitlab.com/cyber5k/mistborn)
3. The VPN is the main thing I wanted set up, to reach resources on my home network; adblocking and DNS came a bit later. So you can run this without a VPN, but it's central to my setup.
4. I really wanted this setup at the OS level and to hopefully learn more about the whole process.
While it's impossible to completely avoid beliefs that are effectively from authority, we do have systems such as science (scientific peer review), capitalism (economic freedom) that give credibility to certain ideas or patterns. Not moral credibility, but effective consensus that is relatively stable. Sure there are disruptions -- scientific revolutions, economic creative destruction -- but those are typically viewed as having been good things after the fact.
Moral authority (elders, traditions, cultural norms) can be helpful in some ways, but it is much cruder and more error-prone. Respected elders can prey on children, and long-held traditions can be oppressive and even harmful (genital mutilation, circumcision). Cultural norms can create significant social costs (women keeping house rather than starting companies or curing diseases, men spending weekends bored out of the social pressure to pretend to like various sports, etc.)
When the average person flips on a light switch they believe they know why the light turned on -- electricity! wire! -- but few could explain it much more specifically than that and could not ELI5 it. So in a sense they are expressing a faith-based belief. But most people can tell you who does understand it and know how to find more detailed explanations if they care to learn. This is quite unlike religious faith/tradition which demands that people profess beliefs that are impossible. When you think about it, the word faith means nearly the same thing as the word doubt only with a different connotation.
Please maintain your professional social contacts!
It's fine not to have a native interest in that. But it's part of the job. No blog or bragging needed. Not even LinkedIn, if you really don't want that one. Catalog and maintain some minimal contact with the people who have seen your work: bosses, colleagues, juniors, vendors, consultants, anyone. Expose a few EXTRA people to your work now and then. This way, when you need to find a new job after many years hidden in just one department, you will have all these people who moved on - sometimes frequently - and only wished they could have taken you with them at the time.
And that has nothing to do with "10x" - no matter the skill level you think you have, you'll be better off with a wide set of people who know you. "Know you" to any degree - often people will prefer hiring you over putting you through the circus of interviewing, with its pretty random results.
If you are fascinated by this sort of thing: in US architectural practice, Architectural Graphic Standards has been the standard for the sizes of things for nearly a century. Even the old "Student Edition" is a rabbit-hole book.
I actually made money accidentally with a BahnCard subscription.
I canceled my BahnCard 50 within the first 14 days (which they have to allow by law) via email, and before the starting date, so I never used it. They confirmed and sent me back 100 euros too much. Why 100 euros? I had originally used a coupon code that saved me 100 euros. Good guy that I am, I told them via mail that they had refunded me too much. What did they say? No, can't be, the BahnCard 50 costs the higher amount; the bank statement I sent them must be for something else.
I was like... alright, no sense in arguing with them anymore.
I'm the author of the post. Do you have any knowledge about how refraction can vary? I was wondering about calculating the world twice, once with a lower refraction bound and then again with an upper.
"Viewsheds" of any location can be calculated and matched with photographs using "GeoImageViewer", an application I wrote a couple of years ago. Any feature in the image can be interactively identified in a mapview and vice versa, including the boundary of the viewshed. As has been mentioned in the comments, it is essential to include atmospheric refraction in the calculation, at least for distances above ~100km.
I have a few of these posts I've written coming out over the next few months that I want people to discuss. Would you prefer I add a disclaimer at the top? Easy enough to add
I'm not sure what you're talking about. Any app compiled using LLVM 17 (2023) can use SME directly and any app that uses Apple's Accelerate framework automatically takes advantage of SME since iOS 18/macOS 15 last year.
Before you assume that LDL isn’t a good biomarker, read the entire article. Specifically this section:
> Why? In some ways, cholesterol has become a victim of its own success. We now screen the whole population for high cholesterol, give statins to those with high LDL (or ApoB), and so then the majority of people who end up having heart attacks have lower cholesterol than they would naturally have
In other words, in the study population, patients who would have had high LDL were likely to be on statins. They had a lower measured LDL value even though they might still be eating a poor diet and living an unhealthy lifestyle, for example. Statins don't fix everything about poor diet and lifestyle, but they do help with cholesterol.
So don’t go throwing LDL out yet. It’s still the best measure we have, though you should obviously know that LDL measured while on statins is lower than it would be normally.
The headline, therefore, is somewhat clickbait from a company trying to sell these tests to you outside of your insurance. I recommend checking your insurance to see if the tests would be covered before you go the self-pay route.
Edit to add: If your doctor won't order hs-CRP for some reason, you can order it from sites like privatemdlabs.com for $50 (less if you take their 25% off coupon).
I made an announcement post here: https://hyperflask.dev/blog/2025/10/14/launch-annoncement/
I'd love to hear feedback!