
Prompts

A prompt is an intention encoded as input. Precision in = precision out.

The model does not guess your intent — it processes your input. Master the input, own the output.

What Is a Prompt

A prompt is any input that triggers a model response: a question, an instruction, a document, an image, a voice recording, or a combination. The model maps input to output using learned patterns. Your job is to shape the input so the output lands where you intended.

Amplify Your Agency

| Domain | Prompt page | Archetype | Outcome | Tools |
| --- | --- | --- | --- | --- |
| Analysis | Analysis | The Realist | Insight and pattern | Perplexity, Claude, Gemini |
| Strategy | Strategy | The General | Direction and plan | Claude, OpenAI o-series |
| Predictions | Predictions | The Oracle | Calibrated foresight | Superforecaster frameworks |
| Coding | Coding | The Engineer | Structure and function | Cursor, Copilot, Claude Code |
| Web Design | Web Design | The Craftsman | Interfaces that work | v0, Bolt, Lovable |
| Visual Art | Visual Art | The Artist | Composition and style | Midjourney, DALL-E, Flux |
| Music | Music | The Composer | Feeling with structure | Suno, Udio, Stable Audio |
| Video | Video | The Director | Motion as meaning | Runway, Sora, Kling |
| Promo Video | Promo Video | The Advertiser | Conversion in seconds | Runway, Kling, Pika, CapCut |
| Games | Games | The Designer | Loops that teach | Unity AI, Inworld, Scenario |
| Voice | Communication | The Translator | Speech and text bridge | Whisper, ElevenLabs, Deepgram |
| Communication | Communication | The Messenger | Influence and scale | Grammarly, Claude, Jasper |
| Creative | Creative | The Dreamer | Novelty and vision | All of the above |
| Library | Library | The Librarian | Copy, paste, iterate | LLMs |


Types of Prompt

| Type | Answers | What it sets |
| --- | --- | --- |
| Chat prompt | What do I want now? | One-turn task or question |
| System prompt | Who is processing? | Character, frame, constraints |
| Agent | Who runs autonomously? | System prompt + model + tools + context |
| Command prompt | What is being done? | Task, input, expected output |
| Specification | What must be true at the end? | Acceptance criteria, constraint architecture |

An agent is a system prompt promoted to character — see Agents for how they are designed and evolved.
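The chat/system/agent distinction maps onto the message-role convention most chat APIs expose. A minimal sketch, assuming the common `system`/`user` role names; the model ID and tool names are illustrative placeholders, not a real configuration:

```python
# A chat prompt is a single user turn; a system prompt frames every turn.
system_prompt = (
    "You are a senior technical editor. "
    "Answer in plain prose, under 150 words, no speculation."
)

messages = [
    {"role": "system", "content": system_prompt},                # character, frame, constraints
    {"role": "user", "content": "Tighten this paragraph: ..."},  # one-turn task
]

# An agent is that same system prompt promoted: prompt + model + tools + context.
agent = {
    "system": system_prompt,
    "model": "some-model-id",       # hypothetical placeholder
    "tools": ["search", "editor"],  # tool names are illustrative
    "context": [],                  # memory, prior outputs, documents
}
```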

Input Modalities

What you can give an AI model — the intention side:

| Modality | Examples | Notes |
| --- | --- | --- |
| Text | Instructions, questions, documents | Universal — every model accepts |
| Code | Functions, repos, error traces | Specialised models apply |
| Image | Diagrams, UIs, photos, screenshots | Vision models |
| Audio | Voice recordings, calls | STT first, then reasoning |
| Video | Clips, screen recordings | Emerging — limited model support |
| Structured data | JSON, CSV, tables | Inject as text or tool call |
| System context | Memory, tool state, prior outputs | Context engineering layer |
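The "inject as text" path for structured data is usually just serialisation into the prompt, wrapped so the model can tell data from instruction. A minimal sketch with invented sample records, using only the standard library:

```python
import json

# Illustrative records; any JSON-serialisable structure works the same way.
records = [
    {"region": "EU", "revenue": 120000},
    {"region": "US", "revenue": 310000},
]

# Serialise and delimit the data so it reads as input, not instruction.
prompt = (
    "Summarise the revenue table below in two sentences.\n\n"
    "<data>\n" + json.dumps(records, indent=2) + "\n</data>"
)
```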

Output Modalities

What you get back — the outcome side:

| Modality | Examples | Prompt page |
| --- | --- | --- |
| Text / prose | Analysis, copy, strategy, plans | Communication |
| Code | Functions, tests, scripts, configs | Coding |
| Images | Visual assets, concepts, art | Visual Art |
| Audio / speech | Narration, voice agents | Communication |
| Video | Creative, promo, explainer | Video |
| Structured data | JSON, tables, reports | Analysis |
| Actions | Tool calls, API triggers, file writes | Agents |

Modality Matrix

Which tool handles each input → output combination:

| Input ↓ / Output → | Text | Code | Image | Audio | Actions |
| --- | --- | --- | --- | --- | --- |
| Text | Claude | Claude Code | Midjourney | ElevenLabs | Claude + tools |
| Code | Claude | Cursor | | | Claude Code |
| Image | Claude (vision) | | Flux / Ideogram | | |
| Audio | Whisper → Claude | | | Cartesia | VAPI |
| Structured data | Claude | Claude Code | | | Claude + tools |

See the full modality reference for model-level detail.

Techniques

| Technique | What it does | Best for |
| --- | --- | --- |
| Zero-shot | Direct ask, no examples | Clear tasks with known output format |
| Few-shot | Provide 2–3 input/output examples | Pattern replication |
| Chain-of-thought | "Think step by step" prefix | Multi-step reasoning |
| Role / persona | Declare who is processing | Consistent frame across tasks |
| System prompt | Define character + constraints | Autonomous or repeated tasks |
| Negative prompting | State what NOT to produce | Image generation, creative control |
| Specification | Full intent contract with acceptance criteria | Agent delegation, complex tasks |
| Iterative refinement | Build through feedback loops | Drafting and editing |
| Context stuffing | Load relevant documents as context | Long-horizon or domain-specific tasks |
| Decomposition | Break complex tasks into sub-tasks | Multi-step production pipelines |
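Few-shot and chain-of-thought are both plain string construction. A minimal sketch of each; the example pairs and the question are invented for illustration:

```python
# Few-shot: 2-3 input/output pairs establish the pattern before the real task.
examples = [
    ("refund pls, item broke", "Category: Refund request"),
    ("how do I reset my password", "Category: Account support"),
]

few_shot = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
few_shot += "\n\nInput: where is my order\nOutput:"  # the real task, pattern implied

# Chain-of-thought: a reasoning prefix ahead of a multi-step question.
cot = (
    "Think step by step.\n\n"
    "A warehouse ships 120 boxes per day. How many does it ship in 3 weeks?"
)
```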

Prompt Disciplines

As models move from chat to autonomous agents, prompting fractures into four disciplines:

| Discipline | What it is | Goal |
| --- | --- | --- |
| Prompt Craft | Clear instructions, guardrails, examples | Reliable single-turn responses |
| Context Engineering | Curating the optimal token set (tools, docs, memory) | Comprehensive information for autonomous tasks |
| Intent Engineering | Encoding purpose and decision boundaries | Aligning agents with strategy |
| Specification Engineering | Agent-fungible documents agents execute against | Embedded oversight without human intervention |
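Context engineering in practice means deciding which candidate items fit a token budget. A minimal greedy sketch; token counts are approximated by word count here, where a real system would use a tokenizer, and the candidate texts are invented:

```python
def pack_context(candidates, budget):
    """Greedily pack (priority, text) items into a rough token budget."""
    chosen, used = [], 0
    for priority, text in sorted(candidates, reverse=True):
        cost = len(text.split())  # crude word-count proxy for tokens
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen

candidates = [
    (3, "Tool definitions: search, file_write"),
    (2, "Relevant doc excerpt about Terraform state locking rules"),
    (1, "Older conversation memory, rarely needed"),
]
context = pack_context(candidates, budget=12)
```

The point of the sketch is the priority ordering: tools and task-relevant documents outrank stale memory when the window is tight.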

Specification Primitives

For agent-grade specifications, five elements are required:

  1. Self-contained problem statement
  2. Acceptance criteria
  3. Constraint architecture
  4. Decomposition
  5. Evaluation design
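The five primitives can travel as one structured document an agent executes against. A minimal sketch; the class and field names are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Specification:
    problem: str            # 1. self-contained problem statement
    acceptance: list[str]   # 2. acceptance criteria
    constraints: list[str]  # 3. constraint architecture
    subtasks: list[str]     # 4. decomposition
    evaluation: str         # 5. evaluation design

    def is_complete(self) -> bool:
        """All five primitives must be present before delegating to an agent."""
        return all([self.problem, self.acceptance, self.constraints,
                    self.subtasks, self.evaluation])

spec = Specification(
    problem="Migrate the nightly cron jobs to the new scheduler.",
    acceptance=["All jobs run on schedule for 48h", "No duplicate runs"],
    constraints=["No downtime", "Touch only scheduler config"],
    subtasks=["Inventory jobs", "Port configs", "Verify runs"],
    evaluation="Compare run logs before and after migration.",
)
```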

Tool Selection

Which tool for which job — first choice is default, second is fallback:

| Modality | JTBD | 1st Choice | 2nd Choice | Prompt page |
| --- | --- | --- | --- | --- |
| Text — Reasoning | Think through hard problems | Claude | OpenAI o-series | Analysis |
| Text — Research | Find and synthesise answers | Perplexity | Gemini Deep Research | Strategy |
| Text — Writing | Draft, edit, persuade | Claude | Writer.com | Communication |
| Code | Build and ship software | Claude Code | Cursor | Coding |
| Web/UI | Design interfaces from prompts | v0 | Bolt / Lovable | Web Design |
| Image | Generate visual assets | Midjourney | Flux / Ideogram | Visual Art |
| Video — Creative | Motion as meaning | Runway | Kling / Sora | Video |
| Video — Promo | Conversion in seconds | Arcads | Reel Farm / CapCut | Promo Video |
| Music | Feeling with structure | Suno | Udio | Music |
| Voice — TTS | Text to natural speech | ElevenLabs | Cartesia | Communication |
| Voice — STT | Transcribe speech to text | Whisper | Deepgram | Communication |
| Voice — Agents | Conversational AI on phone | VAPI | Bland AI | Communication |
| 3D | Generate 3D assets from text/image | Tripo | Meshy | |
| Games | Interactive loops that teach | Unity AI | Inworld | Games |

First Principles

Five rules that apply across every modality — text, voice, image, music, video, code:

| Principle | What it does | Example |
| --- | --- | --- |
| Context | Ground the model in your world | "You are a senior infrastructure engineer reviewing a Terraform plan" |
| Constraint | Narrow the output space | "Under 200 words, bullet points only, no speculation" |
| Example | Show, don't describe | "Input: X → Output: Y. Now do Z." |
| Iteration | Refine through feedback loops | "That's close. Now make it more concise and remove the passive voice" |
| Structure | Shape the response format | "Use XML tags: \<analysis\>, \<recommendation\>, \<risk\>" |

Chain-of-thought = Structure + Iteration. Persona prompting = Context + Constraint. Negative prompting = Constraint applied to pixels.
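The principles compose into a single prompt. A minimal sketch that assembles context, constraint, example, and structure into one string; iteration happens across turns, so it appears only as a comment:

```python
# Each part maps to one principle from the table above.
parts = {
    "context": "You are a senior infrastructure engineer reviewing a Terraform plan.",
    "constraint": "Under 200 words, bullet points only, no speculation.",
    "example": "Input: plan adds 3 S3 buckets -> Output: flag public-access settings.",
    "structure": "Respond using the tags <analysis>, <recommendation>, <risk>.",
}

prompt = "\n\n".join(parts.values()) + "\n\nReview the attached plan."
# Iteration: feed the model's answer back with a refinement instruction,
# e.g. "That's close. Now make it more concise."
```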


Questions

  • If input precision determines output quality, which input modality are you least precise with — and what does that cost per run?
  • Zero-shot gets speed; few-shot gets accuracy; specification gets delegation. Which step are you stuck at?
  • The modality matrix shows what each tool handles — which cell in your workflow still has no tool assigned?
  • When does a system prompt become an agent — and where is that line in your current setup?