Value Stories
The meta-language for data. Each story validates that the data footprint instrument reads, scores, maps, and connects — turning 267 silent tables into ranked, actionable intelligence.
Can the instrument read?
Does Run Introspection populate every table with objective facts?
Operator clicks Run Introspection on /admin/data-footprint. Last run was 12 days ago. All 267 tables show N/A for score, structure, relations, and CRUD.
Populate record counts, column counts, FK graphs, and relationship data for every table in one run.
All 267 tables show column count, record count, and FK count within 60s of run completion. Zero N/A rows remain in the table list. The last-run timestamp updates.
Button says complete while tables still show N/A. Partial run populates 50 tables and silently stops. Introspection writes to meta_database_introspection but meta_table_documentation rows never created.
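The silent-partial failure above is catchable with a completeness check before the button reports done. A minimal sketch, assuming a hypothetical row shape for meta_table_documentation in which unpopulated facts are null:

```typescript
// Hypothetical shape of a meta_table_documentation row after introspection.
// Field names are illustrative, not the actual schema.
interface TableDoc {
  tableName: string;
  recordCount: number | null;  // null = the run never reached this table
  columnCount: number | null;
  fkCount: number | null;
}

// Return the tables that would still render as N/A in the UI.
// The run may only report "complete" when this list is empty.
function findUnpopulated(docs: TableDoc[]): string[] {
  return docs
    .filter(
      (d) =>
        d.recordCount === null || d.columnCount === null || d.fkCount === null,
    )
    .map((d) => d.tableName);
}
```

This also gives the partial-run failure a concrete signature: 50 populated rows plus 217 names in the unpopulated list, instead of a green button over a broken table.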
Operator opens a table detail panel for planning_projects. Sees No introspection data available. No column list, no FK graph, no schema drift indicator.
See full column analysis, foreign key relationships, and schema drift detection for any table.
Detail panel shows all columns with data types. FK references list referenced and referencing tables. Schema drift shows mismatches between DB state and .schema.ts definition if any exist.
Column list renders but FK references are empty even when FKs exist. Schema drift always shows no drift even when columns were added or removed post-migration. Detail panel shows stale data from a previous introspection run.
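Drift detection reduces to a two-way set difference between live DB columns and the columns declared in the table's .schema.ts definition. A sketch, assuming both sides are available as column-name lists (the function name and shapes are hypothetical):

```typescript
// Compare columns introspected from the live DB against the columns
// declared in the table's .schema.ts definition.
function detectDrift(liveColumns: string[], declaredColumns: string[]) {
  const live = new Set(liveColumns);
  const declared = new Set(declaredColumns);
  return {
    // Present in the DB but absent from the schema file (added post-migration).
    addedInDb: liveColumns.filter((c) => !declared.has(c)),
    // Declared in the schema file but missing from the DB (removed or never migrated).
    missingFromDb: declaredColumns.filter((c) => !live.has(c)),
  };
}
```

"Always shows no drift" usually means one of these two lists is computed from the wrong side, or the comparison runs against cached introspection data rather than the live DB.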
Can I score what matters?
Can operators score maturity and classify on-chain potential?
Domain owner reviews 47 venture tables. All show metaScore 0.0/5. No way to distinguish high-value tables from low-value. No scoring UI exists.
Score tables on 3 dimensions (schema maturity, in-situ docs, data completeness) plus overall metaScore so high-value tables surface to the top.
After scoring, tables sort by metaScore descending. Domain average updates in domain chip. Top-scored tables identifiable in under 30 seconds by scanning the sorted list.
All tables get the same score because the scoring UI defaults to 3/5. Scores save but sort order doesn't change. Domain average doesn't update when individual table scores change. Scoring requires a page reload to see results.
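The scoring math is small enough to pin down in code. A sketch assuming metaScore is the mean of the three dimensions (the source doesn't specify the aggregation, so that is an assumption):

```typescript
interface TableScore {
  tableName: string;
  domain: string;
  schemaMaturity: number;   // 0–5
  inSituDocs: number;       // 0–5
  dataCompleteness: number; // 0–5
}

// Assumed aggregation: metaScore = mean of the three dimensions.
const metaScore = (s: TableScore): number =>
  (s.schemaMaturity + s.inSituDocs + s.dataCompleteness) / 3;

// Sort descending so high-value tables surface to the top.
const rankTables = (scores: TableScore[]): TableScore[] =>
  [...scores].sort((a, b) => metaScore(b) - metaScore(a));

// Domain chip average, recomputed whenever an individual score changes.
const domainAverage = (scores: TableScore[], domain: string): number => {
  const inDomain = scores.filter((s) => s.domain === domain);
  return inDomain.reduce((sum, s) => sum + metaScore(s), 0) / inDomain.length;
};
```

Deriving the sort order and domain average from the stored scores (rather than caching them) removes two of the failure modes above: stale sort order and a stale domain chip.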
Strategist needs to know which tables have on-chain potential for Walrus/Sui storage. onchainPotential is false on all 267 tables. No criteria exist for what qualifies.
Classify tables by on-chain suitability with clear criteria — identity, trust, attestation, lineage — so the team knows which data benefits from immutable decentralized storage.
onchainPotential toggleable per table with reason captured. Filter view shows only on-chain candidates. Assessment criteria visible: identity (portable), trust (tamper-proof), attestation (proof of capability), lineage (provenance trail).
Toggle exists but no classification criteria. Every table marked potential because criteria are vague. No filter — must scroll through 267 tables to find candidates. Reason not captured so classification is unauditable.
What does this data feed?
Which work charts and ventures does each table enable?
Operator asks which work charts agent_profiles feeds. outcomeEnablement field is null on all 267 tables. No mapping exists between tables and BOaaS operations.
Map each table to the JTBD, work chart, or BOaaS operation it enables so operators know why each table matters to the business.
outcomeEnablement shows linked work charts with phase specificity (e.g. business-venture: SCAN phase, content-pipeline: author identification). insightsOut shows what reports or decisions consume this data.
Field populated with generic text ("supports operations") rather than specific work chart references. No structured selector — free text only, so mappings are inconsistent. Links are not clickable.
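The free-text failure mode disappears if outcomeEnablement stores structured references and the display string is derived. A sketch, with a hypothetical shape built from the story's own examples:

```typescript
// Structured mapping instead of free text: a work chart reference with
// optional phase or note, so entries are consistent and linkable.
interface OutcomeEnablement {
  workChart: string; // e.g. "business-venture"
  phase?: string;    // e.g. "SCAN"
  note?: string;     // e.g. "author identification"
}

// Derive the display label from the structure, never the reverse.
const enablementLabel = (e: OutcomeEnablement): string =>
  e.phase
    ? `${e.workChart}: ${e.phase} phase`
    : e.note
      ? `${e.workChart}: ${e.note}`
      : e.workChart;
```

Because `workChart` is an identifier rather than prose, the UI can render it as a link to the work chart page, addressing the "links not clickable" gap.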
Venture founder asks which tables their venture needs activated. No connection between ventures and data tables exists. Cannot tell what data is blocking venture progress.
See which ventures demand which tables, and whether those tables are ready (data + CRUD) or blocking (schema only).
Venture filter shows tables demanded by selected venture. Tables with recordCount > 0 AND hasCrudInterface = true marked ready. Tables with schema only marked blocking. Count of ready vs blocking visible.
Venture filter shows all 267 tables regardless of venture selection. No distinction between ready and blocking — all tables look the same. Filter exists but venture list is empty.
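The ready/blocking rule is stated exactly in the acceptance criteria and is worth pinning down as a predicate (field names match the story; the function names are illustrative):

```typescript
interface VentureTable {
  tableName: string;
  recordCount: number;
  hasCrudInterface: boolean;
}

// Story rule: ready = data present AND a CRUD interface exists.
// Anything else (schema only, data without CRUD) is blocking.
const isReady = (t: VentureTable): boolean =>
  t.recordCount > 0 && t.hasCrudInterface;

// Counts shown next to the venture filter.
function readyVsBlocking(tables: VentureTable[]) {
  const ready = tables.filter(isReady).length;
  return { ready, blocking: tables.length - ready };
}
```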
Is the pipeline flowing?
Do ETL runs and CRUD detection write back to the instrument?
Operator runs Load Business Ideas ETL pipeline from /admin/etl-pipelines. Pipeline completes successfully. Data footprint page shows no change — pipeline and footprint are disconnected.
ETL pipeline execution automatically updates pipelineIn, pipelineReliability, and dataFreshness on the tables it touched.
After a pipeline run, affected tables show last-refreshed timestamp, reliability percentage, and pipeline name. The dashboard's With CRUD UI count (or equivalent pipeline count) updates from 0 to the actual value.
Pipeline runs but does not write to meta_table_documentation. Data footprint page and ETL pipeline page remain disconnected. Pipeline writes to a log file instead of the database.
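The write-back the story demands is a per-run update of three fields on every touched row. A sketch, assuming hypothetical shapes for the footprint row and the pipeline's run summary:

```typescript
// Hypothetical footprint row; nulls mean "never fed by a pipeline".
interface FootprintRow {
  tableName: string;
  pipelineIn: string | null;
  pipelineReliability: number | null; // percentage, 0–100
  dataFreshness: string | null;       // ISO timestamp of last refresh
}

// Hypothetical run summary emitted when a pipeline finishes.
interface PipelineRun {
  pipelineName: string;
  tablesTouched: string[];
  successRatePct: number;
  finishedAt: string;
}

// Update only the rows the run touched; everything else passes through.
// The point is that this writes to the footprint store, not a log file.
function applyRun(rows: FootprintRow[], run: PipelineRun): FootprintRow[] {
  const touched = new Set(run.tablesTouched);
  return rows.map((r) =>
    touched.has(r.tableName)
      ? {
          ...r,
          pipelineIn: run.pipelineName,
          pipelineReliability: run.successRatePct,
          dataFreshness: run.finishedAt,
        }
      : r,
  );
}
```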
7 tables have admin CRUD UI in the engineering repo but hasCrudInterface shows false for all 267 in meta_table_documentation. Interface detection is disconnected from the scoring instrument.
CRUD, API, and agent interface detection writes coverage flags to meta_table_documentation so the dashboard shows accurate coverage.
The With CRUD UI count updates from 0 to the actual count (7+); the With Agent Interface count likewise. Coverage percentage visible per domain. Numbers match actual engineering repo state.
Detection runs but writes to a separate report file instead of the database. Dashboard still shows 0 CRUD. Detection counts UI pages but misidentifies which tables they serve.
Build Contract
9 build rows across 4 jobs. Each row maps to a feature ID and a story.
| # | Job | Feature | State |
|---|---|---|---|
| B1 | Introspection Engine | DATA-009 | Partial |
| B2 | Introspection Engine | DATA-009 | Gap |
| B3 | Introspection Engine | DATA-009 | Partial |
| B4 | Scoring Instrument | DATA-010 | Gap |
| B5 | Scoring Instrument | DATA-010 | Gap |
| B6 | JTBD Mapping | DATA-011 | Gap |
| B7 | JTBD Mapping | DATA-011 | Gap |
| B8 | Pipeline Integration | DATA-012 | Gap |
| B9 | Pipeline Integration | DATA-012 | Gap |