Agentic Coding

When does a single prompt stop being enough — and how do you chain prompts without hiding behind a framework?

Start close to the metal. A minimalist chainable API beats a heavy framework until the task proves otherwise.

Start Minimal

  • Create a simple class with a single method for chaining prompts.
  • Allow context to be passed between steps, with back-references to earlier prompt results.
  • Use no external libraries beyond the LLM API itself.
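The three points above can be sketched as one small class. `call_model` is a stand-in for whatever thin wrapper you put around your LLM client; its name and the `{0}`-style back-reference syntax are assumptions of this sketch, not a prescribed API:

```python
class PromptChain:
    """Minimal chainable prompt runner: one class, one chaining method,
    no dependencies beyond the LLM call itself."""

    def __init__(self, call_model):
        # call_model: any function str -> str (your LLM client wrapper;
        # swapped for a stub in tests).
        self.call_model = call_model
        self.results = []  # every prior result, available for back-reference

    def step(self, template):
        # Fill {0}, {1}, ... with earlier results, run the prompt,
        # store the output, and return self so calls chain.
        prompt = template.format(*self.results)
        self.results.append(self.call_model(prompt))
        return self

    def last(self):
        return self.results[-1]
```

Usage is a single fluent line, e.g. `PromptChain(call_model).step("Summarize: ...").step("List key terms in: {0}").last()`.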

When To Chain

Ask before reaching for a chain:

  • Is the task too complex for a single prompt?
  • Do you need to reduce errors and improve stepwise reasoning?
  • Does a later prompt depend on the output of an earlier one?
  • Does the workflow branch based on intermediate results?
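The last question, branching on intermediate results, is often the clearest sign a chain is warranted. A minimal sketch (the ticket-triage scenario and prompt wording are illustrative assumptions):

```python
def triage(call_model, ticket):
    """Branch the workflow on an intermediate result: the first prompt
    classifies the ticket, and that label decides which follow-up runs."""
    label = call_model(f"Classify this ticket as BUG or QUESTION: {ticket}")
    if label.strip() == "BUG":
        return call_model(f"Write reproduction steps for: {ticket}")
    return call_model(f"Draft a reply answering: {ticket}")
```

A single prompt can be asked to "classify, then respond accordingly", but splitting the branch out makes each path independently testable.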

Design The Chain

  • Break the task into smaller, focused prompts.
  • Plan how each prompt builds on the result of the previous.
  • Keep state explicit — the chain carries context, the model does not remember.
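Keeping state explicit can be as simple as re-sending an earlier output inside the next prompt. A sketch, with illustrative prompt text (`call_model` is again an assumed wrapper around your LLM client):

```python
def outline_then_expand(call_model, topic, section):
    # The model has no memory between calls, so the second prompt
    # must carry the outline itself -- the chain is the state.
    outline = call_model(f"Draft an outline for an essay on {topic}.")
    return call_model(f"Expand {section} of this outline:\n{outline}")
```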

Implement And Refine

  • Create a list of prompts, each solving one sub-task.
  • Run them sequentially. Pass context and results between steps.
  • Test the full chain. Adjust individual prompts before adjusting the structure.
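The three steps above fit in a dozen lines. This sketch assumes a `{step0}`, `{step1}`, ... naming convention for passing results between templates; the convention itself is arbitrary, the point is that it is explicit:

```python
def run_chain(call_model, templates):
    """Run a list of prompts sequentially. Each template can reference
    earlier outputs as {step0}, {step1}, ... so every step's context
    stays explicit and each prompt can be adjusted in isolation."""
    context = {}
    for i, template in enumerate(templates):
        context[f"step{i}"] = call_model(template.format(**context))
    return context
```

Because `run_chain` returns the whole context dict, testing the full chain and inspecting any intermediate step are the same operation.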

Scale Up

  • Use the minimalist chain as a building block for multi-agent workflows.
  • Add logic for agent state and tool responses only when you hit a real limit.
  • Keep abstractions tied to your specific use case — resist premature generalization.
  • Review prompts and chains as models improve; assumptions from six months ago are usually wrong.
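As a sketch of the first point: two minimalist chains composed as "agents", with state passed explicitly between them exactly as within one chain. The agent roles and prompt texts are illustrative assumptions:

```python
def research_agent(call_model, question):
    # One focused chain acting as an "agent".
    return call_model(f"Gather key facts about: {question}")

def writer_agent(call_model, facts):
    return call_model(f"Write a short answer using only these facts:\n{facts}")

def answer(call_model, question):
    # Agents compose the same way chain steps do: explicit hand-off,
    # no shared memory, no framework required.
    return writer_agent(call_model, research_agent(call_model, question))
```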

Questions

  • At what point does a prompt chain earn the overhead of becoming a framework?
  • Which step in a chain is most likely to drift when the underlying model changes?
  • When does passing explicit context beat trusting the model's memory — and when is the reverse true?
  • What does a chain reveal about the task that a single prompt hides?