
Naming Standards

How do you make a system's concepts navigable by both humans and machines?

Without consistency, improvement is guesswork.

Three Layers

Three layers, each with a different job. Most teams build one and wonder why the other two break.

| Layer | What It Is | Job To Be Done |
|---|---|---|
| Taxonomy | Structured classification — domains, subdomains, artifact types | Shared mental model of where things live. Navigation, discovery, grouping |
| Nomenclature | Rule-based naming — formats, casing, patterns, uniqueness | Clear, unique, predictable labels. Parseable by humans and machines |
| Ontology | Formal model of concepts and relationships | Agents and tools reason over the domain. Bridges between taxonomies |

Taxonomy gives structure (what concepts exist and how they're grouped). Nomenclature gives labels (rules for naming instances within that structure). Ontology gives semantics (relations so systems can reason about the structure).

| Without | Symptom |
|---|---|
| Taxonomy | Same concept lives in three places, nobody finds anything |
| Nomenclature | Every file named differently, search is guesswork |
| Ontology | Agents can't reason, every integration is bespoke |

Taxonomy

Our canonical set of domains, subdomains, and artifact types.

Operations Hierarchy

Four levels from strategic to tactical:

STANDARD (Why) → PROCESS (What) → WORKFLOW (How) → CHECKLIST (Verify)

| Term | Question | Scope | Changes | Example |
|---|---|---|---|---|
| Standard | Why does this matter? | Organization-wide rules | Rarely | Brand Guidelines, Security Policy |
| Process | What needs to happen? | End-to-end outcome | Periodically | Marketing, Onboarding, Sales |
| Workflow | How do we do this activity? | Step-by-step execution | Regularly | Article Copywriting, Lead Qualification |
| Checklist | Did we do it right? | Verification | As needed | Quality Gate, Pre-publish Check |
STANDARD: Marketing Principles
└── PROCESS: Content Marketing
    └── WORKFLOW: Article Copywriting
        └── CHECKLIST: Final Quality Gate

Disambiguation:

| Use This | Not This | Reason |
|---|---|---|
| Standard | Policy, Guideline, Rule | Implies measurable compliance |
| Process | System, Function | Implies flow from trigger to outcome |
| Workflow | SOP, Procedure, Playbook | Tool-agnostic and clear |
| Checklist | Task list, To-do | Implies verification, not work |

Special terms:

| Term | Definition | When to Use |
|---|---|---|
| Protocol | Sequenced principles into a repeatable method under defined conditions | Health protocols, agent protocols, governance rules |
| Playbook | Collection of related workflows | "Everything you need to run [function]" |
| Practice | Informal pattern not yet documented | Capture as workflow when proven valuable |
| Work Chart | Capability-to-demand mapping | Who does what (separate from how) |

Protocol covers three domains:

| Domain | Scope | Example |
|---|---|---|
| Coordination | Between systems or agents | A2A, TCP/IP, MCP |
| Operational | Within a domain, for practitioners | Sleep protocol, sales discovery, movement health |
| Governance | For collective decisions | Peer review, DAO voting, futarchy |

All are sequenced principles. They differ in who coordinates.

Protocols sit at the third layer of the Knowledge Stack — where principles become methods. When a protocol is proven and adopted organization-wide, it becomes a standard.

Full definitions: Process Optimisation

Content Structure

| Directory | Purpose | Content Type |
|---|---|---|
| /docs/ | Published documentation | Evergreen reference, frameworks, guides |
| /meta/ | Published insights | Time-stamped articles, predictions, reflections |
| /.agent-work/ | Working files | Drafts, research, planning (not version controlled) |
| /.ai/ | Agent configuration | Skills, commands, context |

| Type | Location | Purpose | Frontmatter |
|---|---|---|---|
| Index | */index.md | Category overview, navigation | sidebar_position: 1 |
| Guide | /docs/*/ | How-to, reference | Standard tags |
| Workflow | /docs/crypto/business/*/ | Step-by-step process | Activity tags |
| Article | /meta/YYYY-MM-DD-*.md | Dated insight (Star) | Date, authors, tags |
| Profile | /docs/intention/mastermind/ | Person/entity analysis | Profile tags |

FACT-STAR Architecture: Keep verbose narratives (Stars) separate from pure data/definitions (Facts). Use [domain]/ directories to store tightly grouped atomic markdown facts, and use /meta/ articles as narrative stars linking back to those facts to keep agent context clean.

5P Structure

Every domain follows the same folder taxonomy:

[domain]/
├── index.md
├── [domain]-principles/ # What truths guide this domain?
├── [domain]-platform/ # What tools/infrastructure?
├── [domain]-protocols/ # What standardized processes?
├── [domain]-performance/ # How do we measure?
└── [domain]-players/ # Who does the work?

Applied fractally: Marketing gets five subfolders. Space gets five subfolders. Countries get scored on five dimensions. Same meta, different matter.
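The fractal application can be sketched as a scaffolding script. This is a minimal illustration, not project tooling; the `scaffold_5p` helper name is hypothetical.

```python
from pathlib import Path

# The five P's every domain repeats, per the 5P folder taxonomy above.
SUBFOLDERS = ["principles", "platform", "protocols", "performance", "players"]

def scaffold_5p(root: Path, domain: str) -> list[Path]:
    """Create [domain]/ with index.md and the five [domain]-* subfolders."""
    base = root / domain
    base.mkdir(parents=True, exist_ok=True)
    (base / "index.md").touch()
    created = []
    for sub in SUBFOLDERS:
        folder = base / f"{domain}-{sub}"   # e.g. marketing-principles/
        folder.mkdir(exist_ok=True)
        created.append(folder)
    return created
```

Running `scaffold_5p(Path("docs"), "marketing")` would produce the same shape as `scaffold_5p(Path("docs"), "space")` — same meta, different matter.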

AI Taxonomy

Three distinct layers. Conflating them is the most common source of confusion in AI tooling discussions.

| Layer | Definition | Job | Examples |
|---|---|---|---|
| Agentic framework | Developer-facing environment + orchestration primitives — where you design, configure, and coordinate agents | Design and run agent behaviors | Claude Code, Gemini CLI, Cursor |
| Agent engine | Autonomous execution runtime implementing the perception→decide→act loop — stateless or persistent | Be the agent in production | Clawbot, Hermes, ElizaOS, CrewAI |
| Agent protocol | Coordination standard between engines, frameworks, and external tools | Wire things together | MCP, A2A, Verifiable Intent |

The mental model:

Agentic framework: "where I design and coordinate agents"
Agent engine: "what runs as the agent in production"
Agent protocol: "how agents and tools talk to each other"

On lifespan: Agent engine covers both stateless (per-task runners) and persistent (long-running daemons). Lifespan is a deployment characteristic, not a definitional one. Sub-distinguish with stateless runner vs persistent agent when lifespan matters — not with a new category.

Disambiguation:

| Use This | Not This | Reason |
|---|---|---|
| Agentic framework | "AI tool", "AI assistant", "coding agent" | Those conflate orchestration with execution |
| Agent engine | "agentic framework" for Clawbot/Hermes | Engines run the loop; frameworks design it |
| Agent protocol | "tool", "plugin", "integration" | Protocol implies a coordination standard, not a feature |

Canonical mapping in this repo: docs/ai/tools/agentic-frameworks/ → agentic frameworks. docs/ai/tools/agent-engines/ → agent engines. docs/software/protocols/agent-protocols/ → agent protocols.


Nomenclature

How we name concrete artifacts within the taxonomy — unambiguous and machine-friendly.

Files & Folders

Folders: kebab-case

✅ docs/business/growth/marketing/
✅ docs/systems/process-optimisation/

❌ docs/Business/Growth/Marketing/ (no capitals)
❌ docs/business/growth/Marketing/ (inconsistent)
❌ docs/business_growth_marketing/ (no underscores)

Files: kebab-case.md or kebab-case.mdx

✅ marketing-article-copywriting.md
✅ process-optimisation.md

❌ MarketingArticleCopywriting.md (no PascalCase)
❌ marketing_article_copywriting.md (no underscores)
❌ marketing article copywriting.md (no spaces)

Index files: index.md or index.mdx for category pages
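The kebab-case rule is mechanical enough to lint. A minimal sketch (the `valid_doc_filename` helper is hypothetical, not repo tooling):

```python
import re

# kebab-case: lowercase alphanumeric words joined by single hyphens
KEBAB = re.compile(r"[a-z0-9]+(-[a-z0-9]+)*")

def valid_doc_filename(name: str) -> bool:
    """Check a docs filename against the kebab-case.md / kebab-case.mdx rule."""
    stem, dot, ext = name.rpartition(".")
    if not dot or ext not in ("md", "mdx"):
        return False
    return KEBAB.fullmatch(stem) is not None
```

The regex rejects every ❌ case above: capitals, underscores, and spaces all fall outside `[a-z0-9-]`.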

Blueprint Artifacts

Files in docs/crypto/business/instruments/ follow a three-part convention:

Pattern: blueprint-{businessFunction}-{artifactName}.md

✅ blueprint-finance-cash-flow-projection.md
✅ blueprint-marketing-go-to-market-strategy.md
✅ blueprint-strategy-pitch-deck.md
✅ blueprint-product-prd-spec.md

❌ template-go-to-market-strategy.md (type prefix, no function ownership)
❌ finance-roi-analysis.md (function prefix, missing blueprint prefix)
❌ blueprint-finance.md (missing artifact name)

blueprint — directory-scoped type prefix. All files here are blueprint artifacts. Enables ls blueprint-* to list the full library.

{businessFunction} — the process domain that owns and signs off on the artifact. Derived from APQC Process Classification Framework — horizontal process, not vertical org unit. Controlled vocabulary:

| Value | Process Domain | Owns |
|---|---|---|
| strategy | Develop and manage business strategy | Business plans, venture pitches, portfolio reviews, opportunity scanners |
| finance | Manage financial resources | Cash flow, ROI, unit economics, cap table, valuation, investor materials |
| marketing | Develop and manage customer relationships | Brand, positioning, GTM, lead magnets, ICP, art direction |
| sales | Sell products and services | RFP responses, proposals, account plans, pricing sheets |
| product | Develop and manage products/services | PRDs, business idea validation, AI priorities |
| engineering | Manage information technology | Tech specs, architecture, API docs, system design |
| operations | Manage business processes | Scorecards, process docs, quality standards |
| hr | Develop and manage human capital | Job descriptions, handbooks, onboarding, compensation |
| legal | Manage legal, compliance, and risk | Contracts, NDAs, privacy policy, compliance checklists |
| data | Manage knowledge, improvement, and analytics | Dashboards, KPI frameworks, data governance, analytics audits |

{artifactName} — kebab-case description of what the artifact does, not what format it is. ISO 15489 principle: name by function, not container. No type words (template, doc, spreadsheet).

Search prefix works:

ls blueprint-finance-* # all finance artifacts
ls blueprint-marketing-* # all marketing artifacts
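The three-part convention parses cleanly with one regex. A sketch, assuming the controlled vocabulary above; `parse_blueprint` is a hypothetical helper name:

```python
import re

# Controlled vocabulary from the APQC-derived table above.
FUNCTIONS = {"strategy", "finance", "marketing", "sales", "product",
             "engineering", "operations", "hr", "legal", "data"}

# blueprint-{businessFunction}-{artifactName}.md
PATTERN = re.compile(
    r"blueprint-(?P<function>[a-z]+)-(?P<artifact>[a-z0-9]+(?:-[a-z0-9]+)*)\.md"
)

def parse_blueprint(filename: str):
    """Return (function, artifact) or None if the name breaks the convention."""
    m = PATTERN.fullmatch(filename)
    if not m or m["function"] not in FUNCTIONS:
        return None
    return m["function"], m["artifact"]
```

Every ❌ example above fails: a missing `blueprint-` prefix never matches, `blueprint-finance.md` has no artifact segment, and an unknown function falls outside the vocabulary.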

Discovery Prompts

Files in .invisible/prompts/{domain}/ follow a three-part convention. Each filename encodes the business function, descriptor, and version — sortable by function with a prefix search.

Pattern: {businessFunction}-{descriptor}-v{major}.{minor}.md

✅ strategy-company-deep-dive-v1.0.md
✅ finance-ma-deal-intelligence-v1.0.md
✅ marketing-brand-sentiment-analysis-v1.0.md
✅ hr-employer-intelligence-v1.0.md

❌ mckinsey-company-deep-dive-v1.0.md (firm name, not business function)
❌ strategy-company-deep-dive.md (missing version suffix)
❌ strategy_company_deepdive_v1.md (underscores + camelCase fragment)
❌ prompt-strategy-company-deep-dive.md (unnecessary type prefix)

{businessFunction} — controlled vocabulary from APQC (same as blueprint artifacts). Describes the process domain that owns or benefits from the prompt output:

| Value | Process Domain | Owns |
|---|---|---|
| strategy | Business strategy + intelligence | Company profiles, competitive war rooms, case studies, synthesis |
| finance | Financial resources | Funding trackers, M&A deal intelligence |
| marketing | Customer relationships | Brand sentiment, positioning, perception |
| sales | Sell products and services | Prospect research, account intelligence |
| product | Products and services | Technology trend analysis, adoption curves |
| hr | Human capital | Employer intelligence, talent mapping |
| data | Analytics and knowledge | Market statistics, data compilation |

{descriptor} — kebab-case 2-4 word description of what the prompt produces. Name by output, not by the firm persona used inside.

v{major}.{minor} — semantic version suffix:

| Bump | When | Example |
|---|---|---|
| minor | Wording improvements, additional bullets, clarified parameters | v1.0 → v1.1 |
| major | Intent changes, restructured output format, new primary goal | v1.1 → v2.0 |

Initial extraction from any source = v1.0. Do not start at v0.x.
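The version suffix makes bumps scriptable. A minimal sketch of the bump rules above (`bump` is a hypothetical helper, not repo tooling):

```python
import re

# {businessFunction}-{descriptor}-v{major}.{minor}.md
PROMPT = re.compile(
    r"(?P<function>[a-z]+)-(?P<descriptor>[a-z0-9]+(?:-[a-z0-9]+)*)"
    r"-v(?P<major>\d+)\.(?P<minor>\d+)\.md"
)

def bump(filename: str, kind: str) -> str:
    """Return the filename with its semantic version bumped ('minor' or 'major')."""
    m = PROMPT.fullmatch(filename)
    if not m:
        raise ValueError(f"not a discovery-prompt name: {filename}")
    major, minor = int(m["major"]), int(m["minor"])
    if kind == "major":
        major, minor = major + 1, 0   # intent change resets the minor counter
    else:
        minor += 1                    # wording improvement
    return f"{m['function']}-{m['descriptor']}-v{major}.{minor}.md"
```

A file without the `-vN.N` suffix (one of the ❌ cases) simply never matches, so it gets rejected rather than silently bumped.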

Required YAML frontmatter:

---
title: Human-readable prompt name
source: https://... (original URL or "internal")
business_function: strategy
model_persona: McKinsey Senior Associate
version: 1.0
---

Search prefix works:

ls strategy-* # all strategy prompts
ls finance-* # all finance prompts
grep -l "v2\." .invisible/prompts/**/*.md # prompts on second major version

Workflow Tasks

Files in .invisible/workcharts/{blueprint}/workflow-tasks/ follow a four-part convention. Each task name encodes its phase, step, action verb, and object — so the filename alone tells you which meta-phase it belongs to, what it does, and what it produces.

Pattern: {numberedPhase}-{step}-{verb}-{object}.json

✅ 2-2-classify-constraints.json (phase 2, step 2, verb: classify, object: constraints)
✅ 3-2-architect-redesigns.json (phase 3, step 2, verb: architect, object: redesigns)
✅ 6-3-write-jsx.json (phase 6, step 3, verb: write, object: jsx)
✅ 0-0-internalize-input.json (phase 0, step 0, verb: internalize, object: input)

❌ 2-2-real-vs-artifact.json (no verb — what action is being taken?)
❌ phase2-classify-constraints.json (redundant 'phase' prefix)
❌ classify-2-2-constraints.json (verb before phase number — breaks sort order)

Three meta-phases group the numbered phases:

META-PHASE 1 (Raw)          → numbered phases 0, 1, 2 → "What is true?"
META-PHASE 2 (Staging)      → numbered phases 3, 4, 5 → "What is the case?"
META-PHASE 3 (Presentation) → numbered phases 6, 7    → "What is the deliverable?"

Verb vocabulary by meta-phase (controlled list):

| Meta-phase | Driving question | Approved verbs |
|---|---|---|
| 1 — Raw | What is true? | extract, gather, classify, audit, rank, verify, normalize, internalize, archive, send, run, complete |
| 2 — Staging | What is the case? | define, architect, model, sequence, validate, map, baseline, assemble |
| 3 — Presentation | What is the deliverable? | generate, create, write, gate, pitch, ship, build, chart |

Verb selection rule:

  • A task verb must belong to its meta-phase's vocabulary
  • A wrong-phase verb signals the task may be in the wrong phase — investigate before adding new verbs
  • New verbs require explicit justification recorded in the meta-phase's docstring (forces deliberate growth, not vocabulary drift)
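The verb-selection rule is checkable from the filename alone. A sketch, assuming the phase-to-meta-phase mapping and verb lists above; `check_task` is a hypothetical helper:

```python
import re
from typing import Optional

META_PHASE = {0: "raw", 1: "raw", 2: "raw",
              3: "staging", 4: "staging", 5: "staging",
              6: "presentation", 7: "presentation"}

VERBS = {
    "raw": {"extract", "gather", "classify", "audit", "rank", "verify",
            "normalize", "internalize", "archive", "send", "run", "complete"},
    "staging": {"define", "architect", "model", "sequence", "validate",
                "map", "baseline", "assemble"},
    "presentation": {"generate", "create", "write", "gate", "pitch",
                     "ship", "build", "chart"},
}

# {numberedPhase}-{step}-{verb}-{object}.json
TASK = re.compile(r"(?P<phase>\d)-(?P<step>\d)-(?P<verb>[a-z]+)-(?P<object>[a-z0-9-]+)\.json")

def check_task(filename: str) -> Optional[str]:
    """Return the meta-phase if filename and verb agree, else None."""
    m = TASK.fullmatch(filename)
    if not m:
        return None
    meta = META_PHASE.get(int(m["phase"]))
    if meta and m["verb"] in VERBS[meta]:
        return meta
    return None   # wrong-phase verb: investigate before adding new verbs
```

A `None` on a well-formed name is exactly the "wrong-phase verb" signal: the task may belong in a different phase.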

Required JSON frontmatter:

{
  "id": "2.2",
  "meta_phase": "raw",
  "numbered_phase": 2,
  "step": 2,
  "verb": "classify",
  "object": "constraints",
  "skill": ".agents/skills/constraint-mapping/SKILL.md",
  "skill_version": "1.0.0",
  "tool": null,
  "tool_version": null,
  "default_executor": "strategy-lead",
  "executor_alternatives": ["research-lead"],
  "validated_in_experiments": ["EXP-026"],
  "evolved_by_experiments": [],
  "reads_from_flavour_profile": ["named_workflows", "voice_signals"]
}

Folder structure (optional but recommended for blueprints with ≥30 tasks):

workflow-tasks/
├── 1-raw/
│   ├── 0-0-internalize-input.json
│   ├── 0-1-personal-ai-plan.json
│   ├── 2-2-classify-constraints.json
│   └── ...
├── 2-staging/
│   ├── 3-1-define-function-outputs.json
│   ├── 4-3-sequence-compounding-value.json
│   └── ...
└── 3-presentation/
    ├── 6-3-write-jsx.json
    ├── 7-1-pitch-deck.json
    └── ...

For blueprints with under 15 tasks, flat structure is fine.

Capability binding (why this matters):

The recipe describes the capability required (skill + tool + their versions). The agent named in default_executor is swappable — any agent capable of wielding the named skill is a valid runner. The recipe survives agent renames. When agent rosters reorganise (as happened during agent-bloat reduction in 2026-04), the workflow tasks don't break — only the default_executor field updates.

Experiment binding (versioning audit trail):

validated_in_experiments and evolved_by_experiments link each task to entries in .invisible/experiments/exp-NNN-*.md. When an experiment's Conclusion section produces a change to a task — better tool, sharper skill version, refined prompt — the change is committed with the experiment ID in the workflow-task frontmatter. This makes blueprint evolution auditable: every change has a receipt and an experiment behind it.

Connection to spine.json:

Each blueprint's spine.json declares the three meta-phases at the top level (with their driving questions and verb vocabularies) and tags every numbered phase with a meta_phase field. Workflow-task names + spine.json grouping must agree. Both layers are required.

Production Narratives

Files under .invisible/engineering/production/ follow a four-part convention. Each filename encodes the narrative type, routing channel, production-thread ID, and slug — so any agent can scan a folder and read both the stage (folder) and the purpose (filename) at a glance.

Pattern: {type}-{channel}-P-NNN-{slug}.md

✅ evidence-cli-P-001-production-flow.md
✅ story-prod-P-014-rfp-intake.md
✅ map-prod-P-014-rfp-intake.md
✅ commission-prod-P-014-rfp-intake.md
✅ retro-flow-P-022-ai-pack-cycle.md

❌ vvs-rfp-intake.md (no type/channel/ID — old format)
❌ FD-027-control-system.md (no type, FD prefix retired)
❌ story_prod_P-014_rfp.md (underscores)
❌ STORY-PROD-P014-rfp-intake.md (capitals, missing dash)

{type} — narrative purpose. Six values aligned to the canonical dev pipeline at /docs/software/development/:

| Type | Maps to pipeline stage | Doc term | Job To Be Done |
|---|---|---|---|
| evidence | PAIN + DEMAND | "Evidence card" | Capture observed friction + apply Dream Filter (value? position? 2nd/3rd-order effects?) |
| story | SPEC + RANK | "Story Contract" | 5P scored bet with hard gates + kill criteria + WHEN/THEN/FORBIDDEN/OUTCOME contract |
| map | MAPS + TYPES | "Outcome Map" | Hex placement + domain types + DoR 02a–05a + SPEC-MAP rows + Phase Sequence |
| commission | COMMISSION | "L0–L4 evidence" | Independent verification — qa-commissioner runs live capability against story |
| retro | post-COMMISSION (Loop 2) | "Retrospective" | Predicted vs actual delta + lesson codified into blueprint |
| experiment | Loop 2/3 evolution | "Experiment" | Hypothesis test on a blueprint change — links to spine.json + workflow-task version |

{channel} — routes to the eng-lead who owns the channel. Controlled vocabulary:

| Code | Channel | Eng-lead |
|---|---|---|
| cli | systems-cli | eng-systems-lead |
| prod | product-mgt | eng-product-lead |
| chain | blockchain | eng-blockchain-lead |
| mkt | marketing-sites | eng-marketing-lead |
| flow | business-transform-journeys | shared (cross-functional) |

P-NNN — production thread ID. One ID per demand. Persists across all artifact types from evidence through retro. Sequential, zero-padded to three digits (P-001, P-042). Reset on 2026-05-11 — FD-NNN retired.

{slug} — kebab-case 2–4 words, ≤20 chars total. Verb-object preferred (production-flow, rfp-intake, ai-pack-cycle). Same slug across paired artifacts of the same P-NNN.
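All four parts can be validated together. A minimal sketch using the controlled vocabularies above; `parse_narrative` is a hypothetical helper name:

```python
import re

TYPES = {"evidence", "story", "map", "commission", "retro", "experiment"}
CHANNELS = {"cli", "prod", "chain", "mkt", "flow"}

# {type}-{channel}-P-NNN-{slug}.md
NARRATIVE = re.compile(
    r"(?P<type>[a-z]+)-(?P<channel>[a-z]+)-(?P<pid>P-\d{3})"
    r"-(?P<slug>[a-z0-9]+(?:-[a-z0-9]+)*)\.md"
)

def parse_narrative(filename: str):
    """Return (type, channel, production_id, slug), or None on any violation."""
    m = NARRATIVE.fullmatch(filename)
    if not m or m["type"] not in TYPES or m["channel"] not in CHANNELS:
        return None
    if len(m["slug"]) > 20:       # slug budget: <=20 chars total
        return None
    return m["type"], m["channel"], m["pid"], m["slug"]
```

Old-format names (`vvs-*`, `FD-NNN-*`) never match because they lack the type/channel/P-NNN triple, so they surface immediately in an audit pass.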

Folder = Stage on the Assembly Line

Folders under .invisible/engineering/production/ encode progress along the eight-station inside-out build line:

00-frame/ ← Dream Filter applied (Station -1)
01-bet/ ← Story scored, hard gates being filled (Station 0)
02-plan/ ← Outcome map drafted, eng-lead review (Station 1)
03-ready/ ← All 11 DoR gates green, broadcast complete (Station 1 → 2 handoff)
04-build/ ← Eng active — ALGO|CLI|API|UI tracked in plan-cli (Stations 2–5)
05-commission/ ← eng_done:true, qa-commissioner running L0–L4 (Station 6)
06-done/ ← bus_done:true (Station 6 complete)
07-compound/ ← Retro filed + lesson codified or experiment opened (Station 7)

A single P-NNN thread leaves a trail of artifacts as it moves from 00-frame/ to 07-compound/. The whole tree is one search away: find production/ -name '*P-NNN*'.

Search prefix works:

ls production/01-bet/ # all stories at SPEC stage
ls production/01-bet/story-cli-* # all CLI bets
find production/ -name 'story-prod-*' # all product-mgt stories across stages
find production/ -name '*P-014*' | sort # full thread for P-014, ordered by station
ls production/07-compound/retro-* # all retros

Required YAML frontmatter:

---
production_id: P-014
type: story        # evidence | story | map | commission | retro | experiment
channel: prod      # cli | prod | chain | mkt | flow
slug: rfp-intake
title: RFP Intake (Story Contract)
related:
  - production/00-frame/evidence-prod-P-014-rfp-intake.md
  - production/02-plan/map-prod-P-014-rfp-intake.md
created: 2026-05-11
last_advance: 2026-05-11
station: 01-bet    # current folder; updated on each move
---

Pairing rule: every story MUST have a matching evidence upstream and SHOULD have a matching map (drafted when story passes 5P ≥ 3.0 + 5 hard gates). Every commission MUST link to its story and map. Every retro MUST link to all upstream artifacts of its P-NNN.

React Components

Files: PascalCase.jsx inside a kebab-case/ domain folder under src/components/.

✅ src/components/content/StoryCard.jsx
✅ src/components/venture/ScoreCard.jsx
✅ src/components/design-system/Button.jsx

❌ src/components/content/story-card.jsx (kebab-case file)
❌ src/components/StoryCard/index.jsx (folder per component)
❌ src/pages/.../vv-stories/index.jsx → function Story(...) { } (inline primitive)

Barrel exports: every component folder has an index.js that re-exports public components. Pages import from the barrel, not the file.

✅ import { StoryCard } from "@site/src/components/content";
❌ import { StoryCard } from "@site/src/components/content/StoryCard";
❌ import { StoryCard } from "../../../../../components/content/StoryCard";

Consolidation rule: any component pattern used on two or more pages lives in src/components/{domain}/. Pages compose primitives. Pages never define them. Full rules in docs/software/products/design/design-language.md.

Index files:

✅ docs/systems/index.md
✅ docs/business/growth/marketing/index.md

❌ docs/systems/systems.md (use index.md)
❌ docs/systems/README.md (use index.md)

Dated content: YYYY-MM-DD-slug.md

✅ meta/2024-12-14-the-amplifying-wave.md
✅ meta/2024-12-10-naming-matters.md

❌ meta/the-amplifying-wave.md (missing date)
❌ meta/14-12-2024-the-wave.md (wrong date format)
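The date prefix is worth validating as a real calendar date, not just a digit pattern. A sketch (`parse_dated` is a hypothetical helper):

```python
import re
from datetime import date

# YYYY-MM-DD-slug.md
DATED = re.compile(r"(\d{4})-(\d{2})-(\d{2})-([a-z0-9]+(?:-[a-z0-9]+)*)\.md")

def parse_dated(filename: str):
    """Return (date, slug) for meta/ articles, or None if the name is off-pattern."""
    m = DATED.fullmatch(filename)
    if not m:
        return None
    try:
        d = date(int(m[1]), int(m[2]), int(m[3]))
    except ValueError:        # catches a DD-MM swap such as month 14
        return None
    return d, m[4]
```

A DD-MM-YYYY name fails at the regex (no four-digit year first), and a plausible-looking but impossible date fails at construction.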

Frontmatter

Required fields:

---
title: Page Title
sidebar_label: Short Label
tags:
  - PrimaryCategory
  - SecondaryCategory
relations:
  governed_by: []
  depends_on: []
  decomposes_into: []
---
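A lint pass can flag pages missing required keys without a YAML parser. A deliberately naive sketch that only inspects top-level keys (`missing_frontmatter_keys` is a hypothetical helper):

```python
REQUIRED = ("title", "sidebar_label", "tags", "relations")

def missing_frontmatter_keys(text: str):
    """Return required top-level keys absent from a page's frontmatter block."""
    if not text.startswith("---\n"):
        return list(REQUIRED)                       # no frontmatter at all
    block = text[4:].split("\n---", 1)[0]           # between the two --- fences
    keys = {line.split(":", 1)[0].strip()
            for line in block.splitlines()
            if ":" in line and not line.startswith((" ", "\t", "-"))}
    return [k for k in REQUIRED if k not in keys]
```

For nested checks (e.g. that relations contains governed_by), a real YAML parser is the better tool; this sketch only answers "is the key present at all?".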

Optional fields:

description: SEO description (150-160 chars)
sidebar_position: 1
authors: [matt]
date: 2024-12-14

Tags

  • Use existing tags before creating new ones
  • Capitalize first letter: Marketing not marketing
  • Singular form: Workflow not Workflows
  • No spaces: BOaaS not BOa aS

Core tags: Systems, Process, Workflow, Standards, BOaaS, Marketing, Platform, AI, Crypto, DePIN

Agent Skills

For .agents/skills/ directories (AAIF standard). All names are kebab-case.

Two naming patterns:

| Pattern | When | Examples |
|---|---|---|
| {cluster}-{job} | Skill belongs to a named pipeline or system | rfp-intake, ait-1.1-company-brief, stackmates-comms-post |
| {verb}-{object} | Standalone cross-cluster action | create-prd, fix-links, sell-the-dream, validate-demand |

Mode suffix convention — when a cluster has distinct phases, the suffix names the mode:

| Suffix | Meaning | Example |
|---|---|---|
| (none) | Strategy / full execution | seo — strategy and optimization |
| -audit | Diagnostic only, no builds | seo-audit — diagnose, no changes |
| -loop | Automated repeating cycle | design-critique-loop — render/measure/fix cycle |

Cluster vocabulary (controlled list):

| Cluster prefix | Capability group | Examples |
|---|---|---|
| rfp- | Sales pipeline (intake → qualify → discover) | rfp-intake, rfp-qualify, rfp-discover |
| ait- | AI transformation client work artifacts | ait-1.1-company-brief, ait-1.2-competitive-landscape |
| stackmates- | CLI / engineering ops wrappers | stackmates-comms-post, stackmates-plan-advance, stackmates-what-next |
| sui- | Sui ecosystem smart contract + GTM | sui-move-patterns, sui-walrus-patterns, sui-ecosystem-scan |
| seo- | Search visibility (audit mode) | seo-audit |

Standalone skills (no cluster prefix): create-prd, fix-links, deep-research, sell-the-dream, validate-demand, score-prds.

Dedup rule — check before creating any new skill:

  1. Search active table in .agents/skills/CLAUDE.md for trigger phrase overlap:
    grep -r 'Activates for' .agents/skills/*/SKILL.md | grep -i "keyword"
  2. If two skills share the same cluster and their description trigger phrases overlap, they are the same skill. Merge them and keep the more descriptive name.
  3. If a new skill fits an existing cluster prefix, it must use that prefix.

✅ rfp-intake, rfp-qualify, rfp-discover (cluster prefix, clear pipeline sequence)
✅ create-prd, fix-links, sell-the-dream (verb-object, standalone)
✅ seo, seo-audit (same cluster, distinct modes)

❌ voice-hemingway AND hemingway-style (same triggers — merge, keep hemingway-style)
❌ industry-analysis AND deep-research (audit trigger overlap — confirm scope boundary)
❌ designing-system (gerund form — use noun: design-system)
❌ designSystem (camelCase — kebab only)
❌ ds (too abbreviated)

Utility Files (Templates)

Blank stencils kept in .invisible/templates/ that agents copy when starting a new artifact.

Pattern: _{slug}-template.md — underscore prefix + -template suffix.

✅ _prediction-template.md (underscore prefix sorts to top; -template suffix names intent)
✅ _transformation-plan-template.md
✅ _social-package-template.md

❌ template-prediction.md (type prefix — names format, not function)
❌ prediction-stencil.md (non-standard suffix; -template is the project convention)
❌ prediction-template.md (missing underscore — won't sort above content files)

Why underscore prefix? Sorts to top of the directory listing, marks the file as a utility (not content), and follows the existing _qa-template.md pattern already used in _analysis/.

Why -template suffix, not -stencil? Three existing utility files in the codebase use -template. One (transformation-plan-stencil.md) used -stencil — renamed for consistency.

Folder vs content: Template files live in .invisible/templates/. They are NEVER committed as named output — always copy and rename before filling.
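The "sorts to top" claim holds under byte-wise (C-locale) ordering, which is what most tooling and Python's default string sort use; locale-aware ls output can differ. A quick demonstration:

```python
# In a byte-wise sort, "_" (0x5F) precedes every lowercase letter (0x61+),
# so underscore-prefixed templates rise above kebab-case content files.
files = ["prediction-2026.md", "_prediction-template.md", "analysis-notes.md"]
top = sorted(files)[0]   # → "_prediction-template.md"
```

Note the caveat: digits (0x30–0x39) sort before the underscore, so a template only outranks lowercase names, which is exactly the population of a kebab-case content directory.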

Word List

| Use | Instead Of | Reason |
|---|---|---|
| Workflow | SOP, Procedure, Playbook (for single activity) | Tool-agnostic, clear |
| Standard | Policy, Guideline, Rule | Implies measurable compliance |
| Process | System (for work documentation) | Implies flow |
| Checklist | Task list, To-do list | Implies verification |
| Work Chart | Org Chart, RACI | Capability-focused |

Discouraged terms:

| Avoid | Use Instead | Reason |
|---|---|---|
| SOP | Workflow | Industry jargon |
| Procedure | Workflow | Ambiguous |
| Policy (for work docs) | Standard | "Policy" implies HR/legal |
| Guideline | Standard or Workflow | Too weak |
| Best Practice | Standard or Workflow | Vague, undocumented |
| Protocol (for daily tasks) | Workflow | Protocol implies a proven method with sequenced principles; workflow implies step-by-step execution of a single activity |

AI writing terms:

| Avoid | Use Instead | Reason |
|---|---|---|
| "dive into" | "explore", "examine" | Cliche |
| "unveil" | "show", "reveal" | Cliche |
| "realm" | "area", "domain" | Cliche |
| "leverage" (as verb) | "use" | Jargon |
| "utilize" | "use" | Unnecessary complexity |
| "in order to" | "to" | Wordy |

Search Patterns

Design names so prefix search works:

"marketing-" → All marketing activities
"workflow-" → All workflow documents
"researching-" → All research skills
"/docs/crypto/business/" → All business documentation

Names should be unique enough to grep:

# Find all workflow documents
grep -r "Workflow" docs/ --include="*.md"

# Find all marketing protocols
ls docs/business/growth/marketing/marketing-processes/

Ontology

Formal relationships so humans and AI agents can operate coherently on the system.

Today this is implicit — encoded in directory structure, link patterns, and page conventions. The path forward is making it explicit and queryable.

| Relationship | Example | Encoded By |
|---|---|---|
| domain HAS subdomain | Business HAS Marketing | Directory nesting |
| standard GOVERNS process | Naming Standards GOVERNS file creation | This page |
| protocol ENABLES interoperability | A2A ENABLES agent coordination | Protocols |
| PRD DECOMPOSES_INTO capabilities | Sales Dev Agent HAS Lead Scoring | Mycelium |
| index DEFINES, star IMPLEMENTS | Standards defines, Industry applies | FACT-Star architecture |
| IntentTrace SEALS loop_iteration | One verified run of any feedback loop | Standard Data Unit |

IntentTrace — the canonical data unit for a verified loop iteration: { intent_ref → action_hash → outcome → delta → verified }. The smallest record you could put a token on. All data sources (receipts, plan completions, commits, graph health) map to this shape. See Standard Data Unit.
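The five-field shape can be pinned down as an immutable record. A sketch only — the field types are assumptions; the shape itself comes from the definition above:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class IntentTrace:
    """Canonical data unit for one verified loop iteration (field types assumed)."""
    intent_ref: str    # reference to the declared intent
    action_hash: str   # fingerprint of the action actually taken
    outcome: str       # observed result
    delta: float       # predicted-vs-actual difference
    verified: bool     # has an independent check sealed this iteration?

# e.g. mapping a plan completion onto the shape:
trace = IntentTrace("P-014/story", "a1b2c3", "shipped", 0.12, True)
```

Because every source (receipts, plan completions, commits, graph health) maps to the same five fields, `asdict(trace)` is a uniform serialization target regardless of origin.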

The data footprint is the ontology made operational. Every industry is a data model. Understanding the data model is understanding the domain.

The Knowledge Schema describes how schemas influence attention and learning. Ontology is the schema made machine-readable — so agents can query and traverse the concept graph without human mediation.


Benchmark Context

Naming is measurement infrastructure.

| Naming Quality Signal | Performance Impact |
|---|---|
| Canonical term is used consistently | Metrics and dashboards aggregate correctly |
| One concept has one name | Trend analysis is comparable over time |
| Prefix and taxonomy are stable | Search and retrieval remain reliable |
| Ambiguous aliases are removed | Less false confidence in performance reporting |

Without naming standards, benchmarks fragment and the scoreboard lies.


Change Log

| Date | Change | Reason |
|---|---|---|
| 2026-05 | Added Utility Files (Templates) convention (_{slug}-template.md) | Intent-clear folder naming: artifact-specs/ for SOPs, templates/ for stencils |
| 2026-05 | Added Discovery Prompts naming convention ({businessFunction}-{descriptor}-v{major}.{minor}) | Single standard for all prompt files with versioning |
| 2026-02 | Added three-layer framework (Taxonomy, Nomenclature, Ontology) | Distinguish structure, labels, and semantics |
| 2025-12 | Adopted 5P folder structure standard | Universal schema for all domains |
| 2025-12 | Renamed marketing-activities to marketing-protocols | Align with 5P framework |
| 2025-12 | Renamed marketing-team to marketing-players | Accommodate human + AI players |
| 2024-12 | Established operations terminology hierarchy | Standardize process documentation |
| 2024-12 | Created naming standards document | Single source of truth for terminology |

Context

Questions

  • If naming is measurement infrastructure, what are you unable to measure because you haven't named it yet?
  • Where in your system do two different names point at the same concept — and which one is lying?
  • When does a practice harden into a workflow and a workflow into a standard — what evidence triggers the promotion?
  • If an agent can only reason over what's in the ontology, what's invisible to your agents right now because it has no formal name?
  • What's the cost of a naming inconsistency that persists for six months before anyone notices?