
Value Stories

The meta-language for data. Each story validates that the data footprint instrument reads, scores, maps, and connects — turning 267 silent tables into ranked, actionable intelligence.

Trigger legend: Action = Human-initiated · Cron = Scheduled · Hook = Event-triggered

Can the instrument read?

Does Run Introspection populate every table with objective facts?

S1 · Action
When

Operator clicks Run Introspection on /admin/data-footprint. Last run was 12 days ago. All 267 tables show N/A for score, structure, relations, and CRUD.

I need to

Populate record counts, column counts, FK graphs, and relationship data for every table in one run.

So I get

All 267 tables show column count, record count, and FK count within 60 seconds of run completion. Zero N/A rows remain in the table list. The last-run timestamp updates.

Not

Button reports complete while tables still show N/A. A partial run populates 50 tables and silently stops. Introspection writes to meta_database_introspection but meta_table_documentation rows are never created.
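The completeness check in S1's "So I get" can be sketched as follows. This is a minimal illustration, assuming a Postgres source where column and FK facts come from information_schema; the TableFacts shape and runIsComplete helper are hypothetical names, not the instrument's actual API.

```typescript
// Per-table facts that Run Introspection must populate (illustrative shape).
interface TableFacts {
  table: string;
  recordCount: number | null; // null means the row still reads "N/A"
  columnCount: number | null;
  fkCount: number | null;
}

// Queries the introspection could issue per table (assumption: Postgres).
const columnCountSql = `
  SELECT count(*) FROM information_schema.columns
  WHERE table_name = $1`;
const fkCountSql = `
  SELECT count(*) FROM information_schema.table_constraints
  WHERE table_name = $1 AND constraint_type = 'FOREIGN KEY'`;

// A run only counts as complete when zero N/A rows remain (S1's "So I get").
function runIsComplete(facts: TableFacts[]): boolean {
  return facts.every(
    (f) => f.recordCount !== null && f.columnCount !== null && f.fkCount !== null
  );
}
```

Gating the "complete" state on this predicate rules out the "Not" case where the button reports success while rows remain unpopulated.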

S2 · Action
When

Operator opens the table detail panel for planning_projects and sees "No introspection data available." No column list, no FK graph, no schema drift indicator.

I need to

See full column analysis, foreign key relationships, and schema drift detection for any table.

So I get

Detail panel shows all columns with data types. FK references list referenced and referencing tables. Schema drift shows mismatches between DB state and .schema.ts definition if any exist.

Not

Column list renders but FK references are empty even when FKs exist. Schema drift always reports no drift, even when columns were added or removed post-migration. Detail panel shows stale data from a previous introspection run.
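Schema drift detection in S2 reduces to a set difference between the live DB columns and the .schema.ts definition. A minimal sketch, assuming column names are the unit of comparison (type-level drift would need a richer shape); detectDrift and Drift are illustrative names.

```typescript
// Drift report: what the .schema.ts definition has that the DB lacks, and vice versa.
interface Drift {
  missingInDb: string[]; // defined in .schema.ts but absent from the DB
  extraInDb: string[];   // present in the DB but not in .schema.ts
}

function detectDrift(schemaCols: string[], dbCols: string[]): Drift {
  const schema = new Set(schemaCols);
  const db = new Set(dbCols);
  return {
    missingInDb: schemaCols.filter((c) => !db.has(c)),
    extraInDb: dbCols.filter((c) => !schema.has(c)),
  };
}

const hasDrift = (d: Drift) => d.missingInDb.length > 0 || d.extraInDb.length > 0;
```

Because both directions are computed, the "Not" case (drift never reported after post-migration adds or removals) cannot pass this check silently.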

Can I score what matters?

Can operators score maturity and classify on-chain potential?

S3 · Action
When

Domain owner reviews 47 venture tables. All show metaScore 0.0/5. No way to distinguish high-value tables from low-value. No scoring UI exists.

I need to

Score tables on 3 dimensions (schema maturity, in-situ docs, data completeness) plus an overall metaScore so high-value tables surface to the top.

So I get

After scoring, tables sort by metaScore descending. Domain average updates in domain chip. Top-scored tables identifiable in under 30 seconds by scanning the sorted list.

Not

All tables get the same score because the scoring UI defaults to 3/5. Scores save but sort order doesn't change. Domain average doesn't update when individual table scores change. Scoring requires page reload to see results.
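The scoring rule in S3 can be sketched as below. This assumes the overall metaScore is a plain average of the three dimensions rounded to one decimal (matching the 0.0/5 display); actual weighting is an open design choice, and metaScore/rankTables are illustrative names.

```typescript
// The three scoring dimensions from S3, each on a 0–5 scale.
interface TableScore {
  schemaMaturity: number;
  inSituDocs: number;
  dataCompleteness: number;
}

// Overall metaScore: unweighted average, one decimal (e.g. 3.7/5).
function metaScore(s: TableScore): number {
  const avg = (s.schemaMaturity + s.inSituDocs + s.dataCompleteness) / 3;
  return Math.round(avg * 10) / 10;
}

// Sort descending so high-value tables surface to the top of the list.
function rankTables<T extends { score: TableScore }>(tables: T[]): T[] {
  return [...tables].sort((a, b) => metaScore(b.score) - metaScore(a.score));
}
```

Re-ranking from the saved scores (rather than a cached order) also rules out the "Not" case where scores save but the sort order never changes.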

S4 · Action
When

Strategist needs to know which tables have on-chain potential for Walrus/Sui storage. onchainPotential is false on all 267 tables. No criteria exist for what qualifies.

I need to

Classify tables by on-chain suitability with clear criteria — identity, trust, attestation, lineage — so the team knows which data benefits from immutable decentralized storage.

So I get

onchainPotential toggleable per table with reason captured. Filter view shows only on-chain candidates. Assessment criteria visible: identity (portable), trust (tamper-proof), attestation (proof of capability), lineage (provenance trail).

Not

Toggle exists but no classification criteria. Every table marked potential because criteria are vague. No filter — must scroll through 267 tables to find candidates. Reason not captured so classification is unauditable.
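S4's four assessment criteria, plus the auditability requirement, can be captured in one predicate. A sketch under the assumption that any single criterion qualifies a table but a missing reason always disqualifies it; the OnchainAssessment shape is illustrative.

```typescript
// The four on-chain criteria from S4, plus the captured reason.
interface OnchainAssessment {
  identity: boolean;    // portable identity
  trust: boolean;       // tamper-proof record
  attestation: boolean; // proof of capability
  lineage: boolean;     // provenance trail
  reason: string;       // required — classification is unauditable without it
}

// A table qualifies only when at least one criterion holds AND a reason exists.
function classifyOnchain(a: OnchainAssessment): boolean {
  const qualifies = a.identity || a.trust || a.attestation || a.lineage;
  return qualifies && a.reason.trim().length > 0;
}
```

Requiring the reason at classification time is what prevents the "Not" case of an unauditable toggle.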

What does this data feed?

Which work charts and ventures does each table enable?

S5 · Action
When

Operator asks which work charts agent_profiles feeds. The outcomeEnablement field is null on all 267 tables. No mapping exists between tables and BOaaS operations.

I need to

Map each table to the JTBD, work chart, or BOaaS operation it enables so operators know why each table matters to the business.

So I get

outcomeEnablement shows linked work charts with phase specificity (e.g. business-venture: SCAN phase, content-pipeline: author identification). insightsOut shows what reports or decisions consume this data.

Not

Field populated with generic text ("supports operations") rather than specific work chart references. No structured selector — free text only, so mappings are inconsistent. Links are not clickable.
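The "Not" case above is free text; the fix S5 implies is storing outcomeEnablement as structured references and rendering the display string from them. A sketch, assuming a work-chart-plus-optional-phase shape; WorkChartLink and formatEnablement are illustrative names.

```typescript
// Structured reference: a work chart with an optional phase qualifier,
// matching S5's example "business-venture: SCAN phase".
interface WorkChartLink {
  workChart: string; // e.g. "business-venture"
  phase?: string;    // e.g. "SCAN"
}

// Render the stored links into the display string the detail panel shows.
function formatEnablement(links: WorkChartLink[]): string {
  return links
    .map((l) => (l.phase ? `${l.workChart}: ${l.phase} phase` : l.workChart))
    .join(", ");
}
```

Because the stored value is structured, the same links can also drive clickable navigation and consistent filtering, which free text cannot.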

S6 · Action
When

Venture founder asks which tables their venture needs activated. No connection between ventures and data tables exists. Cannot tell what data is blocking venture progress.

I need to

See which ventures demand which tables, and whether those tables are ready (data + CRUD) or blocking (schema only).

So I get

Venture filter shows tables demanded by selected venture. Tables with recordCount > 0 AND hasCrudInterface = true marked ready. Tables with schema only marked blocking. Count of ready vs blocking visible.

Not

Venture filter shows all 267 tables regardless of venture selection. No distinction between ready and blocking — all tables look the same. Filter exists but venture list is empty.
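S6 states the readiness rule exactly: recordCount > 0 AND hasCrudInterface = true means ready, schema-only means blocking. A direct sketch of that rule with the ready-vs-blocking count; readiness and summarize are illustrative names.

```typescript
// A table as seen through the venture filter (illustrative shape).
interface VentureTable {
  name: string;
  recordCount: number;
  hasCrudInterface: boolean;
}

type Readiness = "ready" | "blocking";

// S6's rule: data present AND a CRUD interface exists.
const readiness = (t: VentureTable): Readiness =>
  t.recordCount > 0 && t.hasCrudInterface ? "ready" : "blocking";

// Counts shown next to the venture filter.
function summarize(tables: VentureTable[]): { ready: number; blocking: number } {
  const acc = { ready: 0, blocking: 0 };
  for (const t of tables) acc[readiness(t)] += 1;
  return acc;
}
```

The two-bucket summary is what distinguishes this view from the "Not" case where all tables look the same.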

Is the pipeline flowing?

Do ETL runs and CRUD detection write back to the instrument?

S7 · Hook
When

Operator runs the "Load Business Ideas" ETL pipeline from /admin/etl-pipelines. The pipeline completes successfully. The data footprint page shows no change — pipeline and footprint are disconnected.

I need to

Have ETL pipeline execution automatically update pipelineIn, pipelineReliability, and dataFreshness on the tables it touched.

So I get

After a pipeline run, affected tables show a last-refreshed timestamp, reliability percentage, and pipeline name. The dashboard "With CRUD UI" count (or equivalent pipeline count) updates from 0 to the actual value.

Not

Pipeline runs but does not write to meta_table_documentation. Data footprint page and ETL pipeline page remain disconnected. Pipeline writes to a log file instead of the database.
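The write-back hook S7 asks for can be sketched as a pure step that turns a pipeline result into meta_table_documentation updates — the point being that the output is database rows, not a log file. The PipelineResult shape and writeBackRows helper are hypothetical; the field names come from the story.

```typescript
// What a completed pipeline run reports (illustrative shape).
interface PipelineResult {
  pipeline: string;        // e.g. "Load Business Ideas"
  tablesTouched: string[];
  succeeded: number;       // rows loaded successfully
  attempted: number;       // rows attempted
  finishedAt: string;      // ISO timestamp
}

// One meta_table_documentation update per touched table.
function writeBackRows(r: PipelineResult) {
  const reliability =
    r.attempted === 0 ? 0 : Math.round((100 * r.succeeded) / r.attempted);
  return r.tablesTouched.map((table) => ({
    table,
    pipelineIn: r.pipeline,
    pipelineReliability: reliability, // percentage, 0–100
    dataFreshness: r.finishedAt,
  }));
}
```

Wiring this step into the pipeline's completion hook is what keeps the ETL page and the footprint page connected.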

S8 · Cron
When

7 tables have an admin CRUD UI in the engineering repo, but hasCrudInterface shows false for all 267 in meta_table_documentation. Interface detection is disconnected from the scoring instrument.

I need to

Have CRUD, API, and agent interface detection write coverage flags to meta_table_documentation so the dashboard shows accurate coverage.

So I get

The "With CRUD UI" count updates from 0 to the actual count (7+), and the "With Agent Interface" count likewise. Coverage percentage is visible per domain. Numbers match actual engineering repo state.

Not

Detection runs but writes to a separate report file instead of the database. Dashboard still shows 0 CRUD. Detection counts UI pages but misidentifies which tables they serve.
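The detection step in S8 amounts to mapping admin UI pages to the tables they serve and emitting per-table flags destined for meta_table_documentation rather than a report file. A sketch; the CrudPage shape and the route-to-table mapping are illustrative, and in practice the hard part is identifying servesTable correctly (the story's misidentification risk).

```typescript
// An admin page discovered in the engineering repo, with the table it serves.
interface CrudPage {
  route: string;       // e.g. "/admin/planning-projects"
  servesTable: string; // e.g. "planning_projects" — must be identified, not guessed
}

// One coverage flag per table, to be written back as hasCrudInterface.
function coverageFlags(allTables: string[], pages: CrudPage[]) {
  const covered = new Set(pages.map((p) => p.servesTable));
  return allTables.map((table) => ({
    table,
    hasCrudInterface: covered.has(table),
  }));
}
```

Because every table gets an explicit flag, the dashboard count is derivable directly from the written rows instead of staying at 0.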

Build Contract

9 build rows across 4 jobs. Each row maps to a feature ID and a story.

#    Job                    Feature    State
B1   Introspection Engine   DATA-009   Partial
B2   Introspection Engine   DATA-009   Gap
B3   Introspection Engine   DATA-009   Partial
B4   Scoring Instrument     DATA-010   Gap
B5   Scoring Instrument     DATA-010   Gap
B6   JTBD Mapping           DATA-011   Gap
B7   JTBD Mapping           DATA-011   Gap
B8   Pipeline Integration   DATA-012   Gap
B9   Pipeline Integration   DATA-012   Gap