
Prompting

What happens when the quality of your questions determines the quality of your workforce?

Questioning is a human capability. Prompting is questioning applied to intelligence systems — biological, phygital, or both. The same principles apply: clarity of intention, structured context, feedback that compounds.

Capability, Not Skill

| Frame | Implication |
| --- | --- |
| Prompting as skill | Learn the syntax, use the tool, move on |
| Prompting as capability | Shape how intelligence flows through every system you touch |

Skills get automated. Capabilities direct automation. The person who can prompt well isn't "using a tool" — they're conducting an orchestra of intelligence across modalities.

The Interface Stack

Prompting is the interface layer between intention and execution. It operates across every modality:

| Modality | What You Prompt | What Returns |
| --- | --- | --- |
| Text | Questions, context, constraints | Reasoning, analysis, code |
| Image | Layout, palette, style, negatives | Visual composition |
| Video | Beat sheet, sequence, motion | Moving narrative |
| Voice | Tone, persona, pacing | Spoken presence |
| Code | Intent, architecture, tests | Working software |
| Agents | Goals, tools, guardrails | Autonomous action |

Same capability. Different outputs. The skill is modality-specific syntax. The capability is structured communication of intent.

Prompt Architecture

Every effective prompt — text, visual, agent — has the same skeleton:

CONTEXT    → What does the system need to know?
INTENT     → What outcome do you want?
CONSTRAINT → What must NOT happen?
FORMAT     → What shape should the output take?
FEEDBACK   → How will you measure and iterate?

| Missing | What Fails |
| --- | --- |
| Context | System guesses your situation wrong |
| Intent | Output is technically correct but useless |
| Constraint | Unwanted elements, wrong tone, scope creep |
| Format | Right content in wrong shape |
| Feedback | First attempt becomes final attempt |
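The skeleton above can be sketched as a small prompt builder. This is a minimal illustration, not any provider's SDK; the function and field labels are assumptions chosen to mirror the skeleton.

```python
# Sketch: assemble one prompt string from the five skeleton elements.
# Purely illustrative -- not tied to any real API.

def build_prompt(context: str, intent: str, constraint: str,
                 fmt: str, feedback: str) -> str:
    """Join the skeleton elements into a structured prompt."""
    return "\n".join([
        f"Context: {context}",
        f"Task: {intent}",
        f"Do not: {constraint}",
        f"Output format: {fmt}",
        f"Success check: {feedback}",
    ])

prompt = build_prompt(
    context="SaaS CTOs who just hit 50 employees",
    intent="Write a 150-word email that books demos",
    constraint="buzzwords, 'leverage', 'synergy'",
    fmt="plain-text email with a single CTA",
    feedback="a colleague can state the CTA after one read",
)
```

The point of the structure is diagnostic: when output fails, you can check which of the five lines was empty.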

Weak vs Strong

| Weak | Strong | What Changed |
| --- | --- | --- |
| "Write me a marketing email" | "Write a 150-word email to SaaS CTOs who just hit 50 employees. Tone: direct, no fluff. CTA: book a demo. Avoid: buzzwords, 'leverage', 'synergy'." | Context, constraint, format |
| "Make a nice diagram" | "Information design poster, 16:9, dark background #111827. Three-column layout. Left: five stacked bars. Centre: ten nodes in a clockwise ring. Right: single block with label." | Spatial specificity |
| "Build me an agent that does research" | "Agent with web search + file read tools. Goal: find the 3 most-cited papers on X from 2024. Output: markdown table with title, authors, citation count, one-sentence finding. Stop after 3." | Tools, goal, format, guardrail |

The strong versions aren't longer for the sake of length. They're specific where vagueness would cost you iterations.

The Feedback Loop

Prompting is not one-shot. It's a control system.

PROMPT → OUTPUT → EVALUATE → ADJUST → PROMPT
  ↑                                      │
  └────────── each cycle sharpens ───────┘

| Cycle | What Improves |
| --- | --- |
| 1 | You discover what the model misunderstood |
| 2 | You add the constraint that was missing |
| 3 | You refine format to match actual need |
| 4+ | You have a reusable prompt template |

Most people stop at cycle 1. The capability compounds at cycle 3+.
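The loop above can be sketched as a literal control system. Everything here is a placeholder: `call_model`, `evaluate`, and `adjust` stand in for your provider call, your quality check, and your prompt revision.

```python
# Sketch of the PROMPT -> OUTPUT -> EVALUATE -> ADJUST loop.
# call_model / evaluate / adjust are hypothetical stand-ins, not a real API.

def refine(prompt, call_model, evaluate, adjust, max_cycles=4):
    """Iterate until the output passes evaluation or cycles run out."""
    for cycle in range(1, max_cycles + 1):
        output = call_model(prompt)
        ok, critique = evaluate(output)
        if ok:
            return output, cycle
        prompt = adjust(prompt, critique)  # fold the critique back in
    return output, max_cycles

# Toy run: the "model" echoes its prompt; evaluation demands a length limit.
result, cycles = refine(
    "Summarise the launch plan.",
    call_model=lambda p: p,
    evaluate=lambda out: ("150 words max" in out, "missing length constraint"),
    adjust=lambda p, critique: p + " 150 words max.",
)
```

In the toy run, cycle 1 fails evaluation and the adjusted prompt passes on cycle 2; that is the "add the constraint that was missing" step from the table, mechanised.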

By Modality

Each modality has its own techniques. The capability transfers; the syntax doesn't.

| Modality | Technique Page | Key Lesson |
| --- | --- | --- |
| Text | Prompting | Structure beats length. XML tags, role anchoring, chain-of-thought. |
| Visual | Visual Prompting | Describe WHERE things go, not just WHAT they are. Negative prompts. |
| Code | AI Coding | Intent + architecture + test expectations. |
| Agents | Agent Frameworks | Goals + tools + guardrails + stop conditions. |

Practice Protocol

  1. Prompt, then diagnose — When output is wrong, find which skeleton element was missing.
  2. Save your best prompts — Build a library. Templates emerge from repeated wins.
  3. Vary one thing — Change context OR constraint OR format. Never all at once.
  4. Read the model's misunderstanding — The error tells you what your prompt actually said vs what you meant.
  5. Cross-modality practice — Prompt text, then image, then video. The capability transfers.
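Step 2's library can start as nothing more than named templates with slots. A minimal sketch using only Python's built-in string formatting; the template names and slot names are illustrative, not prescribed.

```python
# Sketch: a prompt library as named templates with named slots.
# Standard library only; all names here are illustrative.

LIBRARY = {
    "cold_email": (
        "Write a {words}-word email to {audience}. "
        "Tone: {tone}. CTA: {cta}. Avoid: {avoid}."
    ),
    "research_agent": (
        "Find the {n} most-cited papers on {topic} from {year}. "
        "Output: markdown table. Stop after {n}."
    ),
}

def render(name, **slots):
    """Fill a saved template; str.format raises KeyError on a missing slot."""
    return LIBRARY[name].format(**slots)

prompt = render(
    "cold_email",
    words=150, audience="SaaS CTOs who just hit 50 employees",
    tone="direct, no fluff", cta="book a demo",
    avoid="buzzwords, 'leverage', 'synergy'",
)
```

The failure mode is useful: a forgotten slot raises an error instead of silently shipping a vague prompt, which enforces the skeleton from cycle 1 onward.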

The Shadow

Over-prompting. Writing 2000-word system prompts when 50 words would do. Prompt engineering as procrastination — perfecting the instruction instead of shipping the output. Treating models as fragile when they're robust to reasonable variation.

Compound Effect

| Prompting + | Creates |
| --- | --- |
| Questioning | Better diagnostic prompts — finding root causes faster |
| Writing | Clearer instructions — fewer iterations to usable output |
| Visualisation | Spatial prompts that produce what you pictured |
| Systems Thinking | Multi-step agent prompts that handle complexity |
| Taste | Knowing when the output is good enough vs needs another pass |

Context

  • Prompts — Text prompting technique, provider guides, copy-paste templates
  • Visual Prompting — Image and video prompt structure
  • Mantra — When prompts become internal triggers
  • Questioning — The human skill underneath
  • AI Coding — Prompting applied to software
  • LLM Models — Which models respond to what