Yeah, that seems bad. The whole point of the diagram was to explain something better than prose could, and now it is lost. I'm thinking of the case where someone can make out the shapes/arrows/colors well, but not the text.
> Llm to text to diagram is the killer workflow these days.
It's useful, but I think only marginally. When using a diagrams-as-code tool, having an LLM write it for you gets you up and running faster, but long term you have more control (and no hallucinations) just writing the code yourself. It'll probably be fewer keystrokes in the end. Doubly so if your diagrams-as-code tool has decent autocomplete.
i think d2 looks sick, and i'm not op, but i do love just describing my system and telling the LLM to draw it. so having it "draw" d2 seems like a great next step?
when you want to make changes, you update your docs/design and repeat. think system design interview but you skip drawing it yourself at all and then sanity check your own work by interpreting the resulting graph etc
Thanks for this feedback. I just tried with a trackpad, and was surprised to see pinch gestures were triggering "wheel" (i.e. mouse wheel) events. This wasn't intended behavior, and should now be fixed. I'll have a think on how native gestures like these could be handled better.
> I think the persistent advice to authors to "do social media" is, at best, part of a strategy that can be generously described as throwing everything at the proverbial wall in the hope that something, anything, will stick.
It also probably was pretty good advice until ~3 years ago.
There are quite a few tools that offer this model-based approach; you define your resources in a model, then use them in multiple perspectives to show different aspects like you describe. Some, like Ilograph[0] (my project), offer interactivity and zooming.
Nobody was stopping anyone from making compilers that introduced random different behavior every time you ran them. I think it's telling this didn't catch on.
There were definitely compilers that used things like data structures with an unstable iteration order, resulting in non-determinism, and people did stop other people from doing that. This behavior would cause non-deterministic performance everywhere and, combined with race conditions or just undefined behavior, other random non-deterministic behaviors too.
At least in part this was achieved with techniques that can also be used to make LLMs deterministic, like seeding the RNGs in hash tables deterministically. LLMs are in that sense no less deterministic than iterating over a hash table (they are just a bunch of matrix multiplications with a sampling procedure at the end, after all).
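To make the seeding point concrete, here is a minimal sketch in Python (not tied to any particular LLM stack): a `sample_token` helper (a hypothetical name for illustration) that does softmax sampling over logits with an explicitly seeded RNG, so the same seed and logits always yield the same token.

```python
import math
import random

def sample_token(logits, seed):
    # Softmax over the logits, then sample from the resulting
    # distribution using a deterministically seeded RNG.
    rng = random.Random(seed)
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Same seed and same logits give the same token on every call.
assert sample_token([2.0, 1.0, 0.5], seed=42) == sample_token([2.0, 1.0, 0.5], seed=42)
```

The sampling step is the only source of randomness here, so pinning its seed makes the whole pipeline reproducible, exactly as seeding a hash table's RNG pins its iteration order.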
Yep, that’s who we use. It is half the cost of VZW’s cheapest plan, with more data and better QCI. The latter turns out to matter a lot more than I thought.