Andrei Alexandrescu comes to mind as someone who does this often. Plenty of talks on YouTube.


The co-creator of D 2.0? That guy is a legend.


One recent example of boilerplate for me: I've been writing dbt models and I get it to write the schema.yml file for me based on the SQL.

It’s basically just a translation, but with dozens of tables, each with dozens of columns, it gets tedious pretty fast.

If given other files from the project as context it’s also pretty good at generating the table and column descriptions for documentation, which I would probably just not write at all if doing it by hand.
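
Roughly, the translation looks like this. A hand-written sketch of the scaffolding, with made-up model and column names, using PyYAML just to show the shape of what gets generated (the LLM produces this straight from the SQL and fills in the descriptions):

  # Sketch of the schema.yml boilerplate, with made-up names.
  import yaml  # PyYAML

  def schema_stub(model_name, columns):
      return {
          "version": 2,
          "models": [{
              "name": model_name,
              "description": "",  # the part that's nice to have filled in from project context
              "columns": [{"name": c, "description": ""} for c in columns],
          }],
      }

  print(yaml.dump(schema_stub("fct_orders", ["order_id", "customer_id", "order_total"]),
                  sort_keys=False))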


This resonates with my experience of using LLMs to build tooling.

I have a repo with several libraries where I need error codes to be globally unique and to adhere to a set of prefixes assigned to each library. This was enforced by carefully reviewing any commits that touched the error code headers.

I’ve had a ticket open for years to write a tool to do this, along with a general idea of the tool’s architecture, but never got around to implementing it.

I used LLMs to research design alternatives (clang tools, tree-sitter, etc.) and eventually implement a tree-sitter-based Python tool that, given a JSON config of the library prefixes, checks that every error code adheres to its library's prefix and that there are no duplicate error codes within a library.

This would probably have taken me at least a few days on my own (or would just have sat in the backlog forever); with the LLM it took about 3 hours.
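
The core of the check is simple. Here's a stripped-down sketch of the idea, with a regex standing in for the tree-sitter parsing; the config format, directory layout, and the "each library owns a block of 1000 codes" reading of "prefix" are all assumptions for illustration:

  import json
  import re
  import sys
  from pathlib import Path

  # e.g. {"libfoo": {"dir": "libfoo/include", "prefix": 1000}, "libbar": {"dir": "...", "prefix": 2000}}
  config = json.loads(Path(sys.argv[1]).read_text())

  ERROR_CODE = re.compile(r"^\s*(\w+)\s*=\s*(\d+)\s*,?\s*$")  # e.g. "FOO_TIMEOUT = 1003,"

  seen = {}  # numeric code -> (library, name), for global uniqueness
  ok = True
  for lib, cfg in config.items():
      lo, hi = cfg["prefix"], cfg["prefix"] + 999
      for header in Path(cfg["dir"]).rglob("*.h"):
          for line in header.read_text().splitlines():
              m = ERROR_CODE.match(line)
              if not m:
                  continue
              name, code = m.group(1), int(m.group(2))
              if not lo <= code <= hi:
                  print(f"{header}: {name}={code} outside {lib} range [{lo}, {hi}]")
                  ok = False
              if code in seen and seen[code] != (lib, name):
                  print(f"{header}: {name}={code} duplicates {seen[code]}")
                  ok = False
              seen[code] = (lib, name)

  sys.exit(0 if ok else 1)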


The ROI on those 3 hours is immense. Runs in milliseconds. No capitalized instructions in AGENTS.md begging models to behave. And you can refine it anytime to cover more cases!


bpftrace was instrumental in helping me diagnose a latency spike on a hot path.

I call into a vendor library that eventually calls into Solarflare's TCPDirect API to send packets to the network.

This call usually stays under 5us and has very little variance, but I was seeing the occasional 30ms or 60ms outlier.

I attached probes to track the latency of each call down the stack and it became quite easy to pin down the specific function.

I eventually tracked it down to a single assignment to a variable that was mmapped to a file; the spikes were caused by page cache writeback.

After figuring it out I found these two articles that go into more detail on the problem.

https://rigtorp.se/virtual-memory/

https://tungdam.medium.com/our-lessons-on-linux-writeback-do...

Here's one of the probes, producing a histogram of the latency of the call to the TCPDirect function; it was surprisingly simple to write.

  sudo bpftrace -p "$PID" -e '
    // —— TCPDirect ——
    // On entry, record a per-thread timestamp.
    uprobe:/lib/x86_64-linux-gnu/libonload_zf.so.1:zft_send_single {
      @zf_start[tid] = nsecs;
    }
    // On return, compute the latency, bucket it into a microsecond histogram,
    // and drop the entry timestamp.
    uretprobe:/lib/x86_64-linux-gnu/libonload_zf.so.1:zft_send_single /@zf_start[tid]/ {
      $lat = nsecs - @zf_start[tid];
      @h_zft = hist($lat / 1000);
      delete(@zf_start[tid]);
    }
  '



I suppose the argument is that while the robot itself might not have run over anyone, it might have caused someone else on the road to do it.

So if we're just measuring how many crashes the robot has been involved in, we can't account for how many crashes the robot indirectly caused.


> I suppose the argument is that while the robot itself might not have run over anyone, it might have caused someone else on the road to do it.

And I repeat, that's a contrived enough scenario that I think you need to come to the table with numbers and evidence if you want to make it. Counting crashes has been The Way Transportation Safety Has Been Done for the better part of a century now and you don't just change methodology midstream because you're afraid of the Robot Overlord in the driver's seat.

Science always has a place at the table. Ludditism does not.


I wouldn't say it's contrived, but I agree it's important to take such questions and back them up with data.

My question is open in that we don't really HAVE data to measure that statement in any meaningful way. The proper response is "that could be valid, we need to find a way to measure it".

Resorting to calling me a luddite because I question whether a metric is really an accurate measure of success (one that I apply to HUMAN drivers as an example first...) really doesn't reflect any sort of scientific approach or method I'm aware of, but feel free to point me to references.


> The proper response is "that could be valid, we need to find a way to measure it".

With all respect, no. You don't treat every possible hypothesis as potentially valid. That's conspiracy logic. Valid ideas are testable ones. If you're not measuring it, then the "proper" response is to keep silent, or propose an actual measurement.

And likewise a proper response is emphatically not to respond to an actual measurement within a commonly accepted paradigm (c.f. the linked headline above) with a response like "well this may not be measuring the right thing so I'm going to ignore it". That is pretty much the definition of ludditism.


>You don't treat every possible hypothesis as potentially valid.

Wrong, you consider and reject hypotheses if we're being specific about the scientific method. In this case, this is a testable question that could be measured, but the common metrics don't accurately measure it for humans as a point of comparison. There is no valid rejection of the hypothesis without more data.

The utility of the hypothesis and the work required to test it are among the many things considered, and that's another discussion.

But actually considering and discussing them IS the scientific and rational method. Your knee-jerk reactions are the closest thing to ludditism in this whole conversation.

>And likewise a proper response is emphatically not to respond to an actual measurement within a commonly accepted paradigm (c.f. the linked headline above) with a response like "well this may not be measuring the right thing so I'm going to ignore it". That is pretty much the definition of ludditism.

Again wrong. In almost every case the correct first question is "are we measuring the right thing?" If we are talking about engineering and science, that's always valid and should ALWAYS be considered. I also never said we should IGNORE crashes; I asked if it's the BEST metric for success on its own.

And for your third incorrect point

>That is pretty much the definition of ludditism.

You obviously missed my point in every post, including the one above. The question of whether "crashes" is the best metric is being applied to humans and technology alike; there is no anti-technology sentiment going on here.

Your emotional reaction to someone questioning something you obviously care about seems to have shut down your logical brain. Take a deep breath and just stop digging.


In addition to the radio signal propagating faster, as noted by the other commenters, for long distances the radio wave also travels a shorter route.

Take one of the routes in the article, Chicago to Sao Paulo.

The distance is about 8,400km in a straight line.

According to https://en.wikipedia.org/wiki/Skywave a single shortwave hop can reach 3,500km, so 3 hops are required, putting the one-way time at about 30ms.

The latency of the shortest commercially available submarine cable between the US and Sao Paulo is alone significantly higher than that (almost double), and the cable comes out of the east coast, so you'd still have to add the latency between Chicago and New York.

Even specialized low latency networks that mix wireless and fiber will still have much higher latency than the radio.

The tradeoff is that shortwave radio has very little bandwidth so you're restricted to simple signals.
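
Rough numbers for that route, as a back-of-the-envelope sketch; the ~2/3 c speed of light in fiber is the standard figure, the rest is straight-line geometry:

  # One-way propagation delay, radio vs. fiber, over the 8,400 km great-circle distance.
  C_KM_PER_MS = 299_792.458 / 1000          # speed of light, km per millisecond

  great_circle_km = 8_400
  radio_ms = great_circle_km / C_KM_PER_MS              # ~28 ms; skywave hops add a little path length
  fiber_ms = great_circle_km / (C_KM_PER_MS * 2 / 3)    # ~42 ms even over an impossibly straight cable

  print(f"radio, straight line: {radio_ms:.0f} ms")
  print(f"fiber, same distance: {fiber_ms:.0f} ms")
  # A real cable lands on the US east coast and follows a longer physical route,
  # which is where the "almost double" comes from.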


I have been using SML in production for a little over a year now without issues, mostly for protocol machines.

I agree the documentation is not great, though; you kind of have to go over the example code and figure things out on your own.

Using embedded data for keeping state (e.g. counters; see https://boost-experimental.github.io/sml/examples/index.html...) along with injecting std::functions to use as actions is pretty powerful.

Here's a basic example I wrote some months ago showing a hierarchical machine (https://gist.github.com/indiosmo/08ab24181770125d5a2448d27f6...).

Also note my usage is pretty basic, as I usually have the same machine ported to C# and Python, so I tend to stick to the smallest subset of functionality supported by the libraries in each language.


I didn’t find a good way to handle unexpected events with SML. I know there is supposed to be a special catch-all event for that, but it only applies when the event is not used anywhere in the FSM, which makes it pretty useless. I'm thinking of rolling my own based on std::variant now.


In the context of hedge funds, I'm assuming compliance.


Yah—Chief Compliance Officer.

