Agent Platform
What makes an agent more than a prompt with memory?
Identity. Communication. Direction. Quality. Without all four, you have a chatbot that forgets. With all four, you have a team member that compounds.
The Job
When the system needs agents that persist across sessions, coordinate across teams, receive directed work, and improve over time, help it operate agents as first-class participants — so every agent instance (Sales Dev, Content Amplifier, Orchestrator) inherits the same platform and only adds domain knowledge.
| Trigger Event | Current Failure | Desired Progress |
|---|---|---|
| New session starts | 5-10 minutes re-exploring what happened | Productive in under 30 seconds via memory + message history |
| Agent needs to remember | Every session starts from zero | 4 memory types persist: working, semantic, procedural, episodic |
| Two agents work on same product | Communicate by merging code | Structured messages on typed channels |
| Priority table updated | Human manually starts engineering session | Dispatch message routes work to correct team |
| Agent blocked | Sets status, waits for next human audit | Block signal broadcasts, help arrives in minutes |
| Plan complete | Engineer says "done," product owner inspects manually | Commissioning agent verifies independently against PRD |
| New agent type needed | Build from scratch, bespoke integration | Clone base pattern, add domain knowledge, load via drmg agent load |
| Enforcement degrades silently | Nobody knows which rules are stale or which generators fail | VVFL audit measures 8 dimensions, learns patterns across cycles |
Pitch-Prompt Deck
Five cards. Five headlines. Five pictures. The meme layer — 80 cents in the dollar.
| Card | Headline | Persuasion | Picture | Problem → Question → Decision |
|---|---|---|---|---|
| Principles | One platform, every agent | Ethos | Outcome Map | 5 PRDs described one system from 5 angles → Where does engineering start? → One platform: identity, memory, comms, dispatch, quality |
| Performance | Ten minutes lost each session | Logos | Value Stream | Agents start from zero every session → How much does forgotten context cost? → 80% recovery time reduction with memory + messages |
| Platform | Everything built, nothing unified | Topos | Dependency Map | Three CLIs, 23 templates, 16 agents — no coordination → What unifies them? → drmg thin router: one entry point, same handlers |
| Protocols | CLI today, protocol tomorrow | Kairos | Capability Map | Agents trapped on one machine → What opens the mesh? → Same 8 message types, A2A protocol wrapper |
| Players | Agents coordinate by merging code | Pathos | A&ID | Four agents on same product communicate by archaeology → What replaces it? → Structured messages on typed channels |
Why This, Why Now
Five separate PRDs described one system from five angles. Engineering would read them and ask: "Where do I start? Which one is the agent?" The answer was all of them and none of them. That's a specification problem, not an engineering problem.
One platform. Five concerns. Each concern is a section, not a separate spec.
What Already Exists
| Component | Location | Status |
|---|---|---|
| Convex schema + functions | libs/infrastructure/convex/src/ | Deployed, 8 queries + 3 mutations |
| Agent comms CLI | tools/scripts/comms/agent-comms.ts | Proven, 5 channels, 8 message types |
| Plan CLI | tools/scripts/orch-meta/planning/plan-cli.ts | DB-native, 32 commands, 2842 lines |
| Agent ETL CLI | tools/scripts/etl/agent-etl-cli.ts | Working, 3 agents loaded |
| Data interface CLI | tools/scripts/orch-meta/data-interface/data-interface-cli.ts | Working |
| 23 plan templates | .claude/orchestration/team-operations/*/_PLANS/ | Active across 5 worktree teams |
| 5 Convex channels | meta, ui, intel, mkt, all | Seeded |
| 8 message types | status, handoff, blocker, decision, complete, question, context, system | Validated |
| 16 agent definitions | .claude/orchestration/_meta/_CONTROL/AGENTS.md | Defined, 3 loaded to prod |
| Cognitive architecture | 4 memory types, 8 JSON files per agent | Orchestrator + Knowledge Architect + Test Engineer loaded |
| WorkChart orchestrators | libs/agency/src/lib/orchestrators/ | Sequential, Concurrent, Adaptive, Group-Chat |
| A2A orchestrator | libs/agency/src/lib/orchestrators/ | Cost tracking, budget enforcement, agent selection |
| Context graph tables | meta_connections_entities, meta_connections_relationships | Schema live, no writers |
| Outcome measurements | outcome_measurements table | Schema live, no writers |
| Semantic memory | agent_memory_semantic table | Live, 38 entries |
The Agent Pattern
Every agent in the system is an instance of a base pattern. The platform manages the lifecycle. Each instance carries domain-specific knowledge while operating within the same framework.
Base Pattern (Every Agent Has)
| Layer | What | Files |
|---|---|---|
| Identity | Name, type, mindset, personality, drivers, capabilities | profile.json, character.json, drivers.json, capabilities.json |
| Memory | Working (4-7 slots), Semantic (S-P-O triples), Procedural (step sequences), Episodic (session records) | working-memory.json, semantic-memory.json, procedural-memory.json, episodic-memory.json |
| Work Chart | Human/AI split defining what the agent handles vs what humans handle | Defined in agent's PRD |
| Protocols | MCP (tool use), A2A (agent handoff), ACP (commerce) | Engineering wiring |
| Commissioning | Independent verification against PRD criteria | Platform-level VVFL audit |
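The memory layer above names S-P-O triples and step sequences. A minimal sketch of what two of those memory files could hold — the field names (`certainty`, `successRate`) are illustrative assumptions, not the actual schema:

```typescript
// Hypothetical shapes for semantic-memory.json and procedural-memory.json —
// field names are illustrative, not the platform's actual schema.
interface SemanticTriple {
  subject: string;    // S
  predicate: string;  // P
  object: string;     // O
  certainty: number;  // 0..1, reinforced across sessions
}

interface ProceduralSkill {
  name: string;
  steps: string[];
  successRate: number; // fraction of sessions where the skill worked
}

// Example entry a Sales Dev instance might persist:
const triple: SemanticTriple = {
  subject: "construction ICP",
  predicate: "responds best to",
  object: "ROI-framed outreach",
  certainty: 0.8,
};
```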
Instance Examples
| Instance | Domain Knowledge | A2A Handoffs |
|---|---|---|
| Sales Dev Agent | Construction/solar ICP, sales playbook | Sales Dev → RFP Agent at Proposal stage |
| Content Amplifier | Content strategy, channel optimization | Content → Social Media at distribution |
| Orchestrator | Team routing, plan management | Orchestrator → any team agent |
Memory Lifecycle
```
Working Memory (4-7 slots, loaded first each session)
    ↓ consolidation
Semantic Memory   (proven facts, concepts, principles)
Procedural Memory (proven skills, steps, success rates)
    ↓ or
Decay (unrefined, not reinforced)

Episodic Memory (session records, last 10, emotional tagging)
```
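The lifecycle's fork — consolidate or decay — can be sketched as a single decision rule. The threshold and field names here are assumptions for illustration, not the platform's actual logic:

```typescript
// Illustrative consolidation rule: reinforced items graduate to semantic
// or procedural memory; unreinforced items decay. Threshold is assumed.
interface WorkingSlot {
  content: string;
  kind: "fact" | "skill";
  reinforcements: number; // times this item recurred across sessions
}

type Destination = "semantic" | "procedural" | "decay";

function consolidate(slot: WorkingSlot, minReinforcements = 2): Destination {
  if (slot.reinforcements < minReinforcements) return "decay"; // unrefined, not reinforced
  return slot.kind === "fact" ? "semantic" : "procedural";
}
```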
Five Concerns
1. Identity and Memory
How agents persist knowledge across sessions.
The problem: Every session starts from zero. Decisions made last week are revisited this week. That's a compounding tax.
| # | Feature | Function | Outcome | State |
|---|---|---|---|---|
| 1 | Agent profile loading | Load 8 JSON files from any path to Supabase | Any agent identity enters the system | Partial (ETL works, hardcoded paths) |
| 2 | Flexible path flag | --path flag on ETL CLI for any filesystem location | Unblocks all future agent profiles | Gap (~20 lines) |
| 3 | Session bootstrap | Load relevant memories at session start | Agent productive in under 30 seconds | Gap |
| 4 | Session extract | Capture decisions and learnings to persistent storage | Knowledge compounds across sessions | Gap |
| 5 | Semantic recall | Query past decisions by meaning | "What did I decide about auth?" returns structured answer | Gap (API exists) |
| 6 | Procedural recall | Retrieve how-to knowledge from past sessions | Repeatable processes don't require rediscovery | Gap (API exists) |
| 7 | Drift detection | Flag contradictory decisions across sessions | Consistency without manual tracking | Gap |
Kill signal: If agents with memory aren't measurably faster than agents without it, the memory is noise.
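Feature 3 (session bootstrap) is the keystone of this concern. A minimal sketch of the startup sequence, assuming hypothetical loader functions — the real implementation would pull from Supabase memory tables and Convex channels:

```typescript
// Session-bootstrap sketch: load working-memory slots and recent channel
// messages in parallel. Loader functions are hypothetical stand-ins.
interface Memory {
  content: string;
  updatedAt: number;
}

async function bootstrapSession(
  loadWorking: () => Promise<Memory[]>,            // the 4-7 working slots
  loadMessages: (n: number) => Promise<string[]>,  // recent channel messages
): Promise<{ slots: Memory[]; context: string[] }> {
  const [slots, context] = await Promise.all([loadWorking(), loadMessages(20)]);
  return { slots, context }; // agent is productive without re-exploring git
}
```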
2. Communication
How agents coordinate in real time.
The problem: Four agents on the same product communicate by merging code. That's archaeology, not coordination.
| # | Feature | Function | Outcome | State |
|---|---|---|---|---|
| 8 | Agent channels | Named communication spaces per plan/team | Messages go to the right audience | Gap (Convex schema exists) |
| 9 | Structured messages | Typed payloads: status, handoff, blocker, decision, complete | Communication is parseable, not just text | Gap (types defined) |
| 10 | Session recovery via messages | Load recent channel messages on session start | Context survives crashes and restarts | Gap |
| 11 | Block signalling | Agent broadcasts when stuck, with context | Help arrives in minutes, not hours | Gap (message type exists) |
| 12 | Cross-team handoff | Structured message with context transfer | Receiving agent starts with full context | Built (agent-comms proven) |
| 13 | Live activity feed | Real-time view of all agent activity per plan | Orchestrator sees everything without polling | Gap |
Kill signal: If session recovery time doesn't improve within 30 days, comms is adding overhead.
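"Parseable, not just text" can be made concrete as a union over the 8 message types. The fields beyond `type` and the guard function are illustrative assumptions, not the deployed Convex schema:

```typescript
// The 8 message types as a TypeScript union — a sketch of "typed payloads".
// Fields beyond `type` are assumptions, not the actual Convex schema.
type MessageType =
  | "status" | "handoff" | "blocker" | "decision"
  | "complete" | "question" | "context" | "system";

interface AgentMessage {
  type: MessageType;
  channel: "meta" | "ui" | "intel" | "mkt" | "all"; // the 5 seeded channels
  from: string;     // sending agent id
  body: string;     // status is one line — brevity enforced by type
  planId?: string;  // optional link to the plan being worked
  sentAt: number;   // epoch ms, enables session recovery by recency
}

function isParseable(msg: unknown): msg is AgentMessage {
  const m = msg as AgentMessage;
  return typeof m?.type === "string" && typeof m?.body === "string";
}
```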
3. Dispatch and Execution
How work gets routed to the right agent with the right plan.
The problem: Human reads priority table, opens terminal, manually starts session. Three CLIs, no coordination.
| # | Feature | Function | Outcome | State |
|---|---|---|---|---|
| 14 | Priority dispatch | Priority table change → Convex message to meta channel | Engineering knows what to build next without reading git | Gap |
| 15 | Plan instantiation from PRD | PRD metadata → plan-cli create with correct template + team | Plans match spec structure, not ad-hoc | Partial (plan-cli exists) |
| 16 | Agent profile per plan | Domain-specific agent loaded for each plan type | Teams have the right mindset and memory for the job | Partial (ETL works) |
| 17 | Plan progress tracking | Status/completion messages update plan state in DB | Single source of truth for progress | Partial (messages work) |
| 18 | Session bootstrap from dispatch | New session loads dispatch + recent messages for team/plan | Agent knows context immediately | Gap |
| 19 | Commissioning dispatch | Dream agent navigates to deployed URL, walks PRD commissioning table, captures GIF/screenshot evidence per feature. Uses browser commissioning protocol. | Builder never validates their own work — proof is operational, not code review | Gap |
Kill signal: If spec-to-ship cycle time doesn't improve within 30 days, orchestration is ceremony.
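Feature 14 turns a priority-table change into a typed message on the meta channel. A hedged sketch of that translation — the payload shape and wording are illustrative, not the wire format:

```typescript
// Priority-dispatch sketch: a table change becomes a system message on
// the meta channel. Payload shape is an illustrative assumption.
interface DispatchMessage {
  type: "system";
  channel: "meta";
  body: string;
}

function toDispatch(prdSlug: string, rank: number): DispatchMessage {
  return {
    type: "system",
    channel: "meta",
    body: `priority update: build ${prdSlug} next (rank ${rank})`,
  };
}
```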
4. VVFL Enforcement
How the system audits itself and learns.
The problem: 5 generators, 23 templates, 17 rules, 13 skills, 16 agents. Nobody knows which are stale, broken, or unused. Quality degrades silently.
| # | Feature | Function | Outcome | State |
|---|---|---|---|---|
| 20 | Context graph seeder | Scan filesystem, upsert enforcement artifacts as nodes/edges | Map of all artifacts and relationships | Gap |
| 21 | Generator auditor | Measure usage rate, correctness, CLAUDE.md presence | Know which generators work | Gap |
| 22 | Template auditor | Query plan completion rates, estimation accuracy | Know which templates produce results | Gap |
| 23 | Rules auditor | Parse violation logs, check rule↔hook coverage | Know which rules are enforced | Gap |
| 24 | Skills auditor | Detect bloat, validate paths, check quality gates | Know which skills are healthy | Gap |
| 25 | Agents auditor | Check line counts, CLAUDE.md freshness, reference validity | Know which agent definitions are accurate | Gap |
| 26 | Platform auditor | Security alerts, typecheck trends, hook coverage | Know platform health | Gap |
| 27 | Virtue auditor | Read commissioning pass/fail from dream repo | Ground truth — did the output serve? | Gap |
| 28 | Pattern extractor | Detect trends across 3+ runs, extract high-certainty patterns | Institutional knowledge that persists | Gap |
| 29 | Memory writer | Write patterns to semantic memory, runs to episodic | Cross-session learning for the system itself | Gap |
| 30 | Action generator | Critical findings → plan issues routed to owning team | Auto-generated improvement tasks | Gap |
Kill signal: If after 5 audit runs, zero patterns lead to improvements, the measurement is ceremony.
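Feature 28's "3+ runs, high-certainty" rule can be sketched directly. The thresholds (3 runs, certainty 0.7) come from this spec; the detection logic itself is an illustrative assumption:

```typescript
// Pattern-extractor sketch: a finding that recurs across enough audit
// runs graduates to a pattern. Logic is illustrative, thresholds from spec.
interface Finding {
  dimension: string;
  issue: string;
  run: number;
}

function extractPatterns(findings: Finding[], minRuns = 3, certaintyFloor = 0.7) {
  const runsByKey = new Map<string, Set<number>>();
  for (const f of findings) {
    const key = `${f.dimension}:${f.issue}`;
    if (!runsByKey.has(key)) runsByKey.set(key, new Set());
    runsByKey.get(key)!.add(f.run);
  }
  const totalRuns = new Set(findings.map((f) => f.run)).size;
  return [...runsByKey.entries()]
    .filter(([, runs]) => runs.size >= minRuns)
    .map(([pattern, runs]) => ({ pattern, certainty: runs.size / totalRuns }))
    .filter((p) => p.certainty >= certaintyFloor);
}
```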
5. Operational Bridge (drmg CLI)
The cross-repo interface. Engineering agents run it to report state. Dream agents read the output to commission outcomes. The CLI is the product being dogfooded — the same operational bridge that customers use for their BOaaS operations.
| # | Feature | Function | Outcome | State |
|---|---|---|---|---|
| 31 | Shared DB context | Single env bootstrap + connection for all modules | No more duplicated DB code | Gap (extract from plan-cli) |
| 32 | Thin router | drmg <module> <command> dispatches to handlers | One entry point for everything | Gap |
| 33 | Plan module | Wrapper around existing plan-cli | drmg plan active works | Gap (pass-through) |
| 34 | Agent module | Load, status, recall, list | drmg agent load --path=<dir> | Gap (absorb agent-etl) |
| 35 | VVFL module | seed-graph, audit, learn, recall, report, act | drmg vvfl audit --dimension=all | Gap |
| 36 | Data module | Wrapper around data-interface-cli | drmg data coverage works | Gap (pass-through) |
| 37 | Priority module | Score PRDs using weighted commissioning formula | drmg priority score ranks PRDs | Gap |
| 38 | Report in L0-L4 format | VVFL report speaks commissioning language | Dream repo consumes VVFL health like any other capability | Gap |
Kill signal: If operators still use the old CLI paths after 30 days, the unified CLI isn't better.
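Feature 32's thin router is the structural heart of the module. A sketch of the dispatch shape — the handler registry and command names are illustrative; the real modules wrap plan-cli, agent-etl, and data-interface-cli as described above:

```typescript
// Thin-router sketch: drmg <module> <command> dispatches to handlers.
// Registry contents are illustrative placeholders, not the real modules.
type Handler = (args: string[]) => string;

const modules: Record<string, Record<string, Handler>> = {
  plan: { active: () => "delegates to plan-cli handlers" },
  agent: { load: (args) => `loads profile from ${args[0] ?? "<path>"}` },
};

function route(argv: string[]): string {
  const [mod, cmd, ...rest] = argv;
  const handler = modules[mod]?.[cmd];
  if (!handler) throw new Error(`drmg: unknown command "${mod} ${cmd}"`);
  return handler(rest); // one entry point, same handlers
}
```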
The Read-Only Mesh
Two repos. Three channels. Neither writes to the other's filesystem.
The audience for this PRD is engineering agents. They read it directly from the dream repo at /home/wik/code/drmg-mental-model/. The table order on phygital-mycelium/index.md is a live instruction set — top of table = build next. PRD commissioning tables are pass/fail criteria. This isn't a reference doc. It's the spec interface.
Three Channels
| # | Channel | Transport | Direction | What Crosses |
|---|---|---|---|---|
| 1 | Filesystem reads | Local filesystem | Both ways | Dream agents read code. Engineering agents read PRDs, priorities, commissioning tables. |
| 2 | Convex messages | Convex DB | Both ways | Typed async: status, handoff, blocker, decision, complete, question, context, system. |
| 3 | Supabase tables | Supabase (via drmg CLI) | Engineering writes, both read | Measurements, plans, patterns, agent state. The commissioning loop closes here. |
```
DREAM REPO (WHY + WHAT)                    ENGINEERING REPO (HOW)
───────────────────────                    ──────────────────────
PRDs, priorities, commissioning            Plans, generators, code, drmg CLI
  │                                          │
  │──── READS files ──────────────────→      │  (PRDs, priorities, docs)
  │ ←──────────────────── READS files        │  (code, architecture)
  │                                          │
  │  CANNOT EDIT ───────────── x             │
  │             x ───────────── CANNOT EDIT  │
  │                                          │
  │  Channel 1: Filesystem reads             │
  │    Priority table = build order          │
  │    PRD commissioning = pass/fail         │
  │                                          │
  │  Channel 2: Convex messages              │
  │    8 typed message types                 │
  │    Async, persistent, per-channel        │
  │                                          │
  │  Channel 3: Supabase tables              │
  │    drmg CLI writes measurements          │
  │    Both repos read state                 │
  │                                          │
  ├─ Commissions independently              ├─ Ships, reports via drmg CLI
```
The Loop
```
Dream team sets priorities (filesystem)
  → Engineering reads PRDs, composes plans
  → Engineering teams execute, report via drmg CLI (Supabase)
  → Dream team reads measurements, commissions outcomes
  → Gap between expectation and outcome drives next priority
  → Loop
```
The 8 message types are the vocabulary. The drmg CLI is the measurement interface. The filesystem is the spec interface. All three channels serve the same loop — the transport differs, the function is identical: close the gap between what was specified and what was shipped.
How It Gets Smarter
| Cycle | What Happens |
|---|---|
| Run 1 | Baseline measurements. No forecasts. Graph seeded. |
| Run 2 | Compare with Run 1. Set forecasts = previous actuals. |
| Run 3+ | Variance tracking. Persistent patterns → semantic memory. Actions auto-generated. |
| Run N | Recall surfaces "this generator always needs X fix" → fix applied → measurement improves → pattern retired. |
| Cross-agent | drmg agent recall surfaces VVFL patterns alongside agent's own memories. The system teaches its agents. |
The Graduation Path
The drmg CLI is the product being dogfooded. The same agent platform that orchestrates our engineering teams is what customers use for their operations. The graduation path is three steps, each one testable, each one using the same 8 message types and the same Supabase tables. Only the transport layer changes.
Three Steps
| Step | Transport | What Changes | What Stays | Proves |
|---|---|---|---|---|
| CLI (now) | Direct DB writes | drmg CLI → Supabase/Convex | Message types, tables, VVFL enforcement | Agent ops work on one machine |
| API (next) | HTTP routes | drmg CLI → REST API → Supabase/Convex | Same commands, same output | Transport is decoupled from storage |
| A2A (then) | A2A protocol | A2A Task Cards → REST API → Supabase/Convex | Same pipeline, standard protocol | Any agent can join the mesh |
Message Types → A2A Mapping
The 8 message types we use today map directly to A2A Task lifecycle states. No new vocabulary needed — just a protocol wrapper.
| Our Message Type | A2A Equivalent | Function |
|---|---|---|
| status | Task status update | Progress reporting |
| handoff | tasks/send | Task delegation to another agent |
| blocker | Task blocked state | Signals need for help or decision |
| decision | Task artifact | Structured decision record |
| complete | Task completed | With output artifacts |
| question | Task input-required | Triggers human-in-loop |
| context | Task metadata | Attached to Task Card |
| system | Agent Card update | Capability advertisement |
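The mapping table above is mechanically a lookup, which is why "just a protocol wrapper" holds. A sketch — the A2A equivalents follow this spec's table and should be checked against the protocol's actual state names before shipping:

```typescript
// The message-type → A2A mapping as a lookup. Equivalents follow this
// spec's table, not verbatim protocol identifiers — verify before use.
const toA2A: Record<string, string> = {
  status: "Task status update",
  handoff: "tasks/send",
  blocker: "Task blocked state",
  decision: "Task artifact",
  complete: "Task completed",
  question: "Task input-required",
  context: "Task metadata",
  system: "Agent Card update",
};

// A protocol wrapper is then a thin translation — no new vocabulary:
function wrap(messageType: string): string {
  const equivalent = toA2A[messageType];
  if (!equivalent) throw new Error(`no A2A mapping for "${messageType}"`);
  return equivalent;
}
```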
What Each Step Unlocks
CLI → API: The drmg CLI becomes a thin client. Engineering agents call drmg plan active and it hits /api/planning/active instead of querying Supabase directly. Same command, same output. But now a web dashboard can call the same endpoint. The CLI and the UI share one source of truth.
API → A2A: The API endpoints get Agent Card and Task Card wrappers. An Agent Card advertises: "I can orchestrate engineering plans, run VVFL audits, manage agent profiles." External agents discover capabilities via A2A, send Task Cards, and the same pipeline processes them. The mesh opens from two repos to any participant.
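A minimal sketch of the Agent Card that advertisement could serve. Field names follow common A2A examples but are assumptions here; the URL is a placeholder, and the capability names are hypothetical:

```typescript
// Minimal Agent Card sketch for the discovery endpoint. Field names and
// capability ids are illustrative assumptions; URL is a placeholder.
const agentCard = {
  name: "drmg-orchestrator",
  description:
    "Orchestrates engineering plans, runs VVFL audits, manages agent profiles",
  url: "https://example.invalid/api", // placeholder, not a real endpoint
  capabilities: ["plan.orchestrate", "vvfl.audit", "agent.profiles"],
};

// Served at /.well-known/agent.json so external agents can discover it.
const body = JSON.stringify(agentCard);
```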
The Dogfooding Proof
| Question | How We Answer It |
|---|---|
| Can agents orchestrate 5 engineering teams? | CLI step — we do this now |
| Can agents do the same via HTTP? | API step — same operations, decoupled transport |
| Can external agents join the mesh? | A2A step — standard protocol, open participation |
Each step uses the same message types, same tables, same VVFL enforcement. If we can successfully orchestrate our own engineering and close the commissioning loop, that's a live demo of Results as a Service. The customer version is the same system pointed at their operations.
Kill signal: If the API step doesn't reduce integration friction vs direct CLI (measured by time-to-first-query for a new consumer), the abstraction layer isn't earning its keep. Ship the CLI as the product instead.
Success Criteria
Functional
| # | Criterion | Target |
|---|---|---|
| F1 | Agent productive after session start | Under 30 seconds (from 5-10 minutes) |
| F2 | Block signal reaches orchestrator | Under 2 minutes |
| F3 | Priority dispatch reaches engineering | Under 2 minutes from table update |
| F4 | VVFL audit produces measurements | All 8 dimensions in --dry-run |
| F5 | Patterns extracted after 3+ runs | Semantic memory entries with certainty > 0.7 |
| F6 | All CLI commands work through drmg | 100% parity with standalone CLIs |
| F7 | API routes serve same data as CLI | 100% parity, response time < 500ms |
| F8 | Agent Card discoverable via A2A | /.well-known/agent.json serves valid Agent Card |
| F9 | External agent completes a task via A2A | Task Card in → artifacts out, no filesystem access needed |
Outcome
| # | Criterion | Threshold |
|---|---|---|
| O1 | Session recovery time | -80% reduction |
| O2 | Duplicate work across teams | -80% reduction |
| O3 | Spec-to-ship cycle time | 2x current rate |
| O4 | VVFL patterns leading to fixes | >= 50% of critical findings resolved |
Commissioning
Identity and Memory
| Component | Schema | API | CLI | Tests | Status |
|---|---|---|---|---|---|
| Agent profile schema | Done | Done | Partial (hardcoded) | Partial | 50% |
| Memory store schema | Done | Done | N/A | Partial | 40% |
| Semantic memory | Done | Done | Pending | Partial | 40% |
| Procedural memory | Done | Done | Pending | Partial | 40% |
| Working memory | Done | Done | Pending | Partial | 40% |
| Episodic memory | Done | Done | Pending | Partial | 40% |
| Flexible path flag (--path) | N/A | Pending | Pending | Pending | 0% |
| Session bootstrap | Done | Partial | N/A | Pending | 25% |
| Session extract | Pending | Pending | N/A | Pending | 0% |
| Drift detection | Pending | Pending | Pending | Pending | 0% |
Communication
| Component | Schema | API | CLI | Tests | Status |
|---|---|---|---|---|---|
| Channel system | Done | Done | Done | Partial | 60% |
| Message types | Done | Done | Done | Partial | 60% |
| Session recovery | Pending | Pending | Pending | Pending | 0% |
| Block signalling | Done | Done | Done | Pending | 50% |
| Cross-team handoff | Done | Done | Done | Partial | 80% |
| Activity feed | Pending | Pending | Pending | Pending | 0% |
Dispatch and Execution
| Component | Schema | API | CLI | Hook | Status |
|---|---|---|---|---|---|
| Priority dispatch | Convex exists | Convex exists | Gap | Gap | 30% |
| PRD → template mapping | N/A | N/A | Gap | N/A | 0% |
| Agent profile per plan | Supabase exists | Exists | ETL works | Gap | 50% |
| Plan progress via messages | Convex exists | Exists | Partial | Gap | 40% |
| Session bootstrap from dispatch | N/A | Convex exists | N/A | Gap | 20% |
| Commissioning dispatch | N/A | N/A | N/A | Gap | 0% |
VVFL Enforcement
| Component | Schema | API | CLI | Tests | Status |
|---|---|---|---|---|---|
| Context graph seeder | Exists | N/A | Gap | N/A | 10% |
| 8 dimension auditors | Exists | N/A | Gap | N/A | 5% each |
| Pattern extractor | Exists | N/A | Gap | N/A | 5% |
| Memory writer | Exists | N/A | Gap | N/A | 10% |
| Action generator | Exists | N/A | Gap | N/A | 5% |
| Report (L0-L4 format) | N/A | N/A | Gap | N/A | 0% |
Unified CLI
| Component | Schema | API | CLI | Tests | Status |
|---|---|---|---|---|---|
| Shared DB context | N/A | N/A | Gap | N/A | 0% |
| Thin router | N/A | N/A | Gap | N/A | 0% |
| Plan wrapper | Exists | Exists | Gap | Exists | 30% |
| Agent module | Exists | Exists | Gap | Partial | 25% |
| VVFL module | Exists | N/A | Gap | N/A | 5% |
| Data wrapper | Exists | Exists | Gap | N/A | 25% |
| Priority module | N/A | N/A | Gap | N/A | 0% |
Build Sequence
Phase 0: Unblock Agent Profiles (0.5 day)
| Task | What | Effort |
|---|---|---|
| Add --path flag to agent-etl-cli | ~20 lines. Unblocks ALL future agents. | 0.5 day |
Phase 1: Unified CLI + VVFL MVP (3-4 sessions)
| Task | What | Effort |
|---|---|---|
| Shared DB context | Extract from plan-cli pattern | 0.5 day |
| Thin router | Parse module + dispatch | 0.5 day |
| Graph seeder | Filesystem → context graph | 1 day |
| 8 auditors | One per enforcement dimension | 3 days |
| Audit command with --dry-run | Wire auditors to CLI | 0.5 day |
Phase 2: CLI Wrappers (1-2 sessions)
| Task | What | Effort |
|---|---|---|
| Plan wrapper | Delegate to plan-cli handlers | 0.5 day |
| Agent module | Absorb agent-etl + add status/recall/list | 1 day |
| Data wrapper | Delegate to data-interface-cli | 0.5 day |
| Priority scorer | Weighted formula from commissioning | 1 day |
Phase 3: Communication Wiring (2 sessions)
| Task | What | Effort |
|---|---|---|
| Priority dispatch (dream → engineering) | Priority table → Convex message | 1 day |
| Session bootstrap from messages | Startup hook loads team context | 1 day |
| Session recovery | Load recent messages on crash/restart | 0.5 day |
Phase 4: Learning Engine (2 sessions)
| Task | What | Effort |
|---|---|---|
| Pattern extractor | Cross-run trend detection | 1 day |
| Memory writer | Patterns → semantic, runs → episodic | 1 day |
| Agent recall of VVFL patterns | Shared semantic memory query | 0.5 day |
| Action generator | Critical → plan issues | 1 day |
Phase 5: Commission Loop (1-2 sessions)
| Task | What | Effort |
|---|---|---|
| Virtue auditor | Read commissioning from dream repo | 1 day |
| Commissioning dispatch | Dream agent navigates deployed URL, walks PRD features with browser commissioning protocol, captures evidence | 2 days |
| Report in L0-L4 format | VVFL speaks commissioning language | 0.5 day |
Phase 6: API Transport (3-4 sessions)
| Task | What | Effort |
|---|---|---|
| API route per drmg module | REST endpoints wrapping CLI handlers | 2 days |
| CLI as thin client | drmg calls API routes instead of direct DB | 1 day |
| Auth + rate limiting | API keys, per-agent rate limits | 1 day |
| Web dashboard consumption | Same endpoints serve UI and CLI | 2 days |
Phase 7: A2A Protocol (2-3 sessions)
| Task | What | Effort |
|---|---|---|
| Agent Card definition | Advertise capabilities per agent type | 1 day |
| Task Card wrapper | Map 8 message types to A2A Task lifecycle | 1 day |
| A2A discovery endpoint | /.well-known/agent.json serves capabilities | 0.5 day |
| External agent handoff | Accept Task Cards from agents outside the mesh | 2 days |
Scope Control
| Scope | Phases | Sessions | What You Get |
|---|---|---|---|
| MVP | 0-1 | ~4-5 | Agent profiles unblocked + drmg CLI + 8 VVFL auditors |
| V1 | 0-3 | ~8-10 | MVP + all CLI wrappers + communication wiring + priority scoring |
| Platform | 0-5 | ~12-14 | V1 + learning, actions, virtue audit, commissioning |
| API | 0-6 | ~16-18 | Platform + HTTP transport, CLI as thin client, web dashboard |
| A2A | 0-7 | ~18-22 | API + standard protocol, external agents can join the mesh |
Risks + Kill Signal
| Risk | Mitigation |
|---|---|
| Migration breaks existing CLIs | Old entry points remain as thin wrappers. No breaking changes. |
| VVFL measurements aren't actionable | --dry-run first 3 runs. Review before enabling writes. |
| Over-coordination | Structured message types enforce brevity. Status is one line. |
| Memory bloat slows bootstrap | Relevance scoring: load top-N by recency and semantic match. |
| Virtue auditor can't parse dream repo | Commissioning tables are markdown — simple regex. Fallback: manual. |
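The memory-bloat mitigation ("load top-N by recency and semantic match") can be sketched as a scoring function. The weights and the substring stand-in for semantic match are illustrative assumptions:

```typescript
// Relevance-scoring sketch: rank memories by recency and query match,
// load top-N. Weights and the match heuristic are assumptions.
interface MemoryEntry {
  content: string;
  lastUsedAt: number; // epoch ms
}

function topN(
  entries: MemoryEntry[],
  query: string,
  n: number,
  now: number,
): MemoryEntry[] {
  const dayMs = 86_400_000;
  const score = (e: MemoryEntry) => {
    const recency = 1 / (1 + (now - e.lastUsedAt) / dayMs); // decays with age
    // Substring match as a stand-in for real semantic similarity:
    const match = e.content.toLowerCase().includes(query.toLowerCase()) ? 1 : 0;
    return 0.4 * recency + 0.6 * match;
  };
  return [...entries].sort((a, b) => score(b) - score(a)).slice(0, n);
}
```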
Kill signal: If agents on this platform aren't measurably more effective than agents without it (session recovery, cycle time, quality scores), the platform is overhead. Simplify to the 3 most useful features and drop the rest.
DB Tables (All Existing — No New Schema)
| Table | R/W | Concern | What Goes In |
|---|---|---|---|
| agent_profiles | R+W | Identity | Agent profiles, capabilities |
| agent_memory_working | R+W | Memory | Active cognitive slots |
| agent_memory_semantic | R+W | Memory + VVFL | Knowledge triples + VVFL patterns |
| agent_memory_procedural | R+W | Memory | Skill sequences |
| agent_memory_episodic | R+W | Memory + VVFL | Session records + audit runs |
| agent_memory_stores | R+W | Memory | Memory containers per agent |
| meta_connections_entities | R+W | VVFL | Context graph nodes |
| meta_connections_relationships | R+W | VVFL | Context graph edges |
| outcome_measurements | R+W | VVFL | Per-artifact metrics |
| orch_meta_plans | R | Dispatch | Plan state, completion rates |
| orch_meta_tasks | R | Dispatch | Task estimation accuracy |
| priority_problem | R+W | VVFL→Plan | Issue logging |
Mycelium Capability
The Agent Platform is the operating system for every agent in every venture. Identity, memory, communication, dispatch, and quality enforcement — composed from existing infrastructure, unified under the drmg CLI. The CLI is the product being dogfooded: the same operational bridge that orchestrates our engineering teams graduates to API routes, then to A2A protocol, then to customer-facing BOaaS operations.
Sales Dev Agent is the first instance. Content Amplifier is the second. Every future agent inherits the same platform and only adds domain knowledge. Every future customer gets the same platform pointed at their operations.
Currently Growing In: Platform — all ventures.
Context
- Sales Dev Agent — First agent instance, proves the pattern
- Content Amplifier — Second instance, proves portability
- Intelligence Functions — Pure logic agents execute
- Data Interface — Table access agents consume
- Commissioning State Machine — Maturity automation this platform audits
- VVFL Loop — Philosophical framework made operational
- Feedback Loops — Why loops that learn are the most valuable asset
- Phygital Beings — Agent identity documentation
- Agent Protocols — MCP, A2A, ACP stack
- A2A Protocol — The graduation target for cross-repo communication
- Standards — Where VVFL patterns graduate when proven
- Phygital Mycelium — The capability catalogue