
Anyone have a convenient solution for doing multi-step workflows? For example, I'm filling out the basics of an NPC character sheet for my game prep. I'm using a certain rule system and want to give the enemy certain tactics, certain stats, certain types of weapons. Right now I have a 'god prompt' trying to walk the LLM through creating the basic character sheet, but the result gets squeezed down into what one or two prompt responses can hold.

If I can do Node-RED or a function chain for prompts and outputs, that would be sweet.



For me, a very simple "break tasks down into a queue and store in a DB" solution has helped tremendously with most requests.

Instead of trying to do everything in a single chat or chain, add steps that ask the LLM to break down the next tasks, with context, and store those in SQLite or something. Then start new chats/chains on each of those tasks.

Then just loop them back into the LLM.

I find that long chats or chains just confuse most models and we start seeing gibberish.

Right now I'm favoring something like:

"We're going to do task {task}. The current situation and context is {context}.

Break down what individual steps we need to perform to achieve {goal} and output these steps with their necessary context as {standard_task_json}. If the output is already enough to satisfy {goal}, just output the result as text."

I find that leaving everything to the LLM in one long sequence is not as effective as using the LLM to break things down and having a DB and code logic to support the development of more complex outcomes.
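The loop above can be sketched in a few lines of Python with SQLite. `call_llm` is a hypothetical stand-in for whatever model API you use; here it returns a canned breakdown so the sketch runs end to end:

```python
import json
import sqlite3

def call_llm(prompt):
    # Placeholder: swap in your real model call (OpenAI, llama.cpp, etc.).
    # Returns a canned JSON breakdown so the example is self-contained.
    return json.dumps([
        {"task": "draft stat block", "context": "level-3 bandit, ambush tactics"},
        {"task": "pick weapons", "context": "prefers ranged openers"},
    ])

def init_queue(db_path=":memory:"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
        id INTEGER PRIMARY KEY, task TEXT, context TEXT, done INTEGER DEFAULT 0)""")
    return conn

def enqueue_breakdown(conn, task, context, goal):
    # One chat whose only job is to produce the task breakdown as JSON.
    prompt = (f"We're going to do task {task}. The current situation and context "
              f"is {context}. Break down what individual steps we need to perform "
              f"to achieve {goal} and output them as a JSON list of "
              f'{{"task", "context"}} objects.')
    steps = json.loads(call_llm(prompt))
    conn.executemany("INSERT INTO tasks (task, context) VALUES (?, ?)",
                     [(s["task"], s["context"]) for s in steps])
    conn.commit()

def run_queue(conn):
    # Each stored subtask gets a fresh chat, so long-context drift never builds up.
    rows = conn.execute("SELECT id, task, context FROM tasks WHERE done = 0").fetchall()
    results = []
    for task_id, task, context in rows:
        results.append(call_llm(f"Do task: {task}. Context: {context}."))
        conn.execute("UPDATE tasks SET done = 1 WHERE id = ?", (task_id,))
    conn.commit()
    return results
```

The DB doubles as a checkpoint: if a subtask's output is garbage, you re-run just that row instead of the whole conversation.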


Indeed! If I'm met with several misunderstandings in a row, asking it to explain what I'm trying to do is a pretty surefire way to move forward.

Also mentioning what to "forget" or not focus on anymore seems to remove some noise from the responses if they are large.


One option for doing this is to incrementally build up the "document" using isolated prompts for each section. I say document because I am not exactly sure what the character sheet looks like, but I am assuming it can be constructed one section at a time. You create a prompt to create the first section. Then, you create a second prompt that gives the agent your existing document and prompts it to create the next section. You continue until all the sections are finished. In some cases this works better than doing a single conversation.
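A rough sketch of that incremental loop, where `generate` is a hypothetical stand-in for your model call and only the accumulated document carries state between prompts:

```python
def build_document(sections, generate):
    """Build a character sheet one section at a time.

    `generate(prompt)` is a hypothetical model-call function you supply.
    """
    document = ""
    for section in sections:
        prompt = (
            "Here is the character sheet so far:\n"
            f"{document or '(empty)'}\n\n"
            f"Write only the next section: {section}. Do not repeat earlier sections."
        )
        # Each section is an isolated prompt; the growing document is the
        # only context handed back to the model.
        document = (document + "\n\n" + generate(prompt)).strip()
    return document
```

Because each prompt starts fresh, a bad section can be regenerated on its own without re-rolling the rest of the sheet.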


Perplexity recently released something like this https://www.perplexity.ai/hub/blog/perplexity-pages


Perhaps this would be of use? https://github.com/langgenius/dify/ I use it for quick workflows and it's pretty intuitive.


Sounds like you need an agent system, some libs are mentioned here: https://lilianweng.github.io/posts/2023-06-23-agent/


You can do multi-shot workflows pretty easily. I like to have the model produce markdown, then add code blocks (```json/yaml```) to hold the interim results. You can lay out multiple "phases" in your prompt, have it perform each one in turn with each phase referencing prior ones, and at the end just pull out the code blocks for each phase to get your structured result.
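Pulling the phase results back out is a one-regex job. A minimal sketch (assuming the model fences each interim result in ```json or ```yaml as described):

```python
import re

# Matches fenced ```json or ```yaml blocks and captures their bodies.
FENCE = re.compile(r"```(?:json|yaml)\s*\n(.*?)```", re.DOTALL)

def extract_blocks(markdown):
    """Return each phase's fenced code-block body, in document order."""
    return [body.strip() for body in FENCE.findall(markdown)]
```

The prose between fences is free to be the model's reasoning; only the fenced payloads make it into your pipeline.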


Did you force it through a parser? You can define a simple grammar (GBNF) in llama.cpp for the LLM to obey.
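For example, a minimal GBNF grammar (a sketch in llama.cpp's grammar format; the field names are just illustrative) that constrains the output to a small JSON object:

```gbnf
root   ::= "{" ws "\"name\":" ws string "," ws "\"hp\":" ws number ws "}"
string ::= "\"" [a-zA-Z ]* "\""
number ::= [0-9]+
ws     ::= [ \t\n]*
```

Passed via `--grammar-file`, this makes malformed output impossible at the sampling level, so you never have to repair the JSON afterwards.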


I still haven’t played with using one LLM to oversee another.

“You are in charge of game prep and must work with an LLM over many prompts to…”
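A sketch of that overseer pattern, where `overseer` and `worker` are hypothetical model-call functions and the overseer both issues instructions and judges the results:

```python
def oversee(goal, overseer, worker, max_rounds=5):
    """Let one LLM drive another toward a goal over multiple prompts.

    `overseer(text)` and `worker(text)` are hypothetical model calls.
    """
    transcript = []
    instruction = overseer(f"You are in charge of game prep. Goal: {goal}. "
                           "Give the worker LLM its first instruction.")
    for _ in range(max_rounds):
        result = worker(instruction)
        transcript.append((instruction, result))
        verdict = overseer(f"Goal: {goal}\nWorker output: {result}\n"
                           "Reply DONE if the goal is met, otherwise give "
                           "the next instruction.")
        if verdict.strip() == "DONE":
            break
        instruction = verdict
    return transcript
```

The `max_rounds` cap matters: without it, two models that keep politely deferring to each other can loop forever.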





