A recent example of boilerplate for me: I've been writing dbt models, and I have the LLM write the schema.yml file for me based on the SQL.
It's basically just a translation, but with dozens of tables, each with dozens of columns, it gets tedious pretty fast.
If given other files from the project as context, it's also pretty good at generating the table and column descriptions for documentation, which I would probably just not write at all if doing it by hand.
This resonates with my experience of using LLMs to build tooling.
I have a repo with several libraries where I need error codes to be globally unique, as well as adhere to a set of prefixes assigned to each library. This was enforced by carefully reviewing any commits that touched the error code headers.
I've had a ticket open for years to write a tool to do this, along with a general idea of the tool's architecture, but never got around to implementing it.
I used LLMs to research design alternatives (Clang tools, tree-sitter, etc.) and eventually implement a tree-sitter-based Python tool that, given a JSON config of the library prefixes, checks that every code adheres to its library's prefix and that there are no duplicate error codes within a library.
This would probably have taken me at least a few days on my own (or would probably just have sat in the backlog forever); it took about 3 hours.
The ROI on those 3 hours is immense. Runs in milliseconds. No capitalized instructions in AGENTS.md begging models to behave. And you can refine it anytime to cover more cases!
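For anyone curious, the core of the check is pretty small. A minimal sketch is below; the NET_0042-style code format, the *_errors.h header naming, and the shape of the JSON config are all made up for illustration, and a regex stands in for the tree-sitter extraction the real tool does:

```python
#!/usr/bin/env python3
"""Rough sketch of the checker's core logic (not the actual tool).

Assumed for illustration: error codes look like NET_0042, they live in
headers matching *_errors.h under each library's directory, and the JSON
config maps library directory -> required prefix, e.g.
{"libnet": "NET_", "libstore": "STORE_"}. The real tool extracts the
codes with tree-sitter; a regex stands in for that step here.
"""
import json
import re
import sys
from pathlib import Path

# Stand-in for the tree-sitter extraction step.
CODE_RE = re.compile(r"\b[A-Z]+_[0-9]{4}\b")


def check(config_path: str, repo_root: str) -> int:
    prefixes = json.loads(Path(config_path).read_text())  # {"libnet": "NET_", ...}
    problems = []
    for library, prefix in prefixes.items():
        seen = {}  # code -> header it was first seen in
        for header in (Path(repo_root) / library).rglob("*_errors.h"):
            for code in CODE_RE.findall(header.read_text()):
                if not code.startswith(prefix):
                    problems.append(f"{header}: {code} does not use prefix {prefix!r}")
                if code in seen:
                    problems.append(f"{header}: duplicate {code} (first seen in {seen[code]})")
                else:
                    seen[code] = header
    for p in problems:
        print(p)
    return 1 if problems else 0


if __name__ == "__main__":
    sys.exit(check(sys.argv[1], sys.argv[2]))
```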
> I suppose the argument is that while the robot itself might not have run over anyone, it might have caused someone else on the road to do it.
And I repeat, that's a contrived enough scenario that I think you need to come to the table with numbers and evidence if you want to make it. Counting crashes has been The Way Transportation Safety Has Been Done for the better part of a century now and you don't just change methodology midstream because you're afraid of the Robot Overlord in the driver's seat.
Science always has a place at the table. Luddism does not.
I wouldn't say it's contrived, but I agree it's important to take such questions and back them up with data.
My question is open in the sense that we don't really HAVE data to measure that statement in any meaningful way. The proper response is "that could be valid, we need to find a way to measure it".
Resorting to calling me a Luddite because I question whether a metric is really an accurate measure of success (one that I apply to HUMAN drivers as an example first...) really doesn't reflect any sort of scientific approach or method I'm aware of, but feel free to point me to references.
> The proper response is "that could be valid, we need to find a way to measure it".
With all respect, no. You don't treat every possible hypothesis as potentially valid. That's conspiracy logic. Valid ideas are testable ones. If you're not measuring it, then the "proper" response is to keep silent, or propose an actual measurement.
And likewise the proper response to an actual measurement within a commonly accepted paradigm (cf. the linked headline above) is emphatically not "well this may not be measuring the right thing so I'm going to ignore it". That is pretty much the definition of Luddism.
>You don't treat every possible hypothesis as potentially valid.
Wrong; if we're being specific about the scientific method, you consider hypotheses and then reject them. In this case this is a testable question that could be measured, but the common metrics don't measure it accurately for human drivers, so there's nothing to compare against. There is no valid rejection of the hypothesis without more data.
The utility of the hypothesis and the work required to test it are among the many things to consider, and that's a separate discussion.
But actually considering and discussing them IS the scientific and rational method. Your knee-jerk reactions are the closest thing to Luddism in this whole conversation.
>And likewise the proper response to an actual measurement within a commonly accepted paradigm (cf. the linked headline above) is emphatically not "well this may not be measuring the right thing so I'm going to ignore it". That is pretty much the definition of Luddism.
Again wrong. In almost every case the correct first question is "are we measuring the right thing?" If we are talking about engineering and science, that question is always valid and should ALWAYS be considered. I also never said we should IGNORE crashes; I asked whether it's the BEST metric for success on its own.
And as for your third incorrect point:
>That is pretty much the definition of Luddism.
You obviously missed my point in every post, including the one above. The question of whether "crashes" is the best metric is being applied to humans and the technology alike; there is no anti-technology sentiment here.
Your emotional reaction to someone questioning something you obviously care about seems to have shut down your logical brain. Take a deep breath and just stop digging.
The latency over the shortest commercially available submarine cable between the US and São Paulo is already significantly higher than that (almost double), and that cable lands on the East Coast, so you'd still have to factor in the latency between Chicago and New York.
Even specialized low-latency networks that mix wireless and fiber will still have much higher latency than the radio.
The tradeoff is that shortwave radio has very little bandwidth, so you're restricted to simple signals.
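To put rough numbers on it (the distances below are assumptions for illustration, and this ignores routing, repeaters, and processing overhead):

```python
# Back-of-envelope one-way latencies, Chicago to São Paulo.
# All distances below are rough assumptions, not measured figures.

C_VACUUM_KM_S = 299_792   # radio propagates at roughly c
C_FIBER_KM_S = 200_000    # light in fiber travels at roughly 2/3 c

great_circle_km = 8_400   # approx. great-circle distance, Chicago to São Paulo
cable_km = 10_600         # assumed length of an East Coast to São Paulo cable
chicago_ny_km = 1_200     # assumed terrestrial fiber route, Chicago to New York

radio_ms = great_circle_km / C_VACUUM_KM_S * 1_000
cable_ms = cable_km / C_FIBER_KM_S * 1_000
leg_ms = chicago_ny_km / C_FIBER_KM_S * 1_000

print(f"shortwave, ideal straight path: ~{radio_ms:.0f} ms")           # ~28 ms
print(f"submarine cable alone:          ~{cable_ms:.0f} ms")           # ~53 ms
print(f"plus the Chicago-NY leg:        ~{cable_ms + leg_ms:.0f} ms")  # ~59 ms
```

Even with generous assumptions the fiber path can't catch the straight-line radio path, simply because light in glass travels at about two thirds of c over a longer route.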
Also, please note my usage is pretty basic: I usually have the same state machine ported to C# and Python, so I tend to use the smallest subset of functionality supported by the libraries in each language.
I didn't find a good way to handle unexpected events with SML. I know there is supposed to be a special catch-all event for that, but it only applies when the event isn't used anywhere in the FSM, which makes it pretty useless. I'm thinking of rolling my own based on std::variant now.