Design System Proving Ground

Can your engineer find THE form standard in 30 seconds?

A split screen. Left: a design system homepage with 404 errors, raw HTML patterns, and three competing "Forms" sections. Right: the same site with a single "Forms" entry, a standard process chain, and a meeting agenda form built from existing atoms. The gap between museum and proving ground.

Problem

A problem well-stated is a problem 80% solved

Situation: An engineer arrives at stackmat.es to build a form. Forms appear in 3 sidebar locations. The /forms CTA is dead. CRM pattern pages use raw HTML (a lib-to-raw markup ratio, L:R, of 1:15). "Coming Soon" stubs ship as if they were complete. The form atoms themselves are healthy (L:R 1.4:1, TanStack integration solid, accessibility thorough) — but the standard process that connects them is invisible.

Intention: One unified forms section with THE standard process (Zod -> Server Action -> FormShell -> Name Attrs -> TanStack), a shipping checklist, lib-only CRM patterns, and a meeting agenda form that proves composition handles real domain complexity.

Obstacle: Only 1 organism (FormShell) exists — composition depth is unproven. CRM pages violate the Prime Directive they're supposed to showcase. Navigation structure evolved organically into 3 competing locations.

Hardest Thing: Proving that the lib can compose real domain forms without needing local components. If the meeting agenda form requires a _components/ directory, the lib has fundamental gaps that no amount of documentation fixes.
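The "no _components/ directory" rule is mechanically checkable. A minimal sketch of such a guard, assuming import specifiers have already been extracted from the module (the package names and paths below are illustrative, not the repo's actual layout):

```typescript
// Hypothetical lib-only guard: given a module's import specifiers, flag any
// that reach into a local _components/ directory instead of the shared lib.
// A non-empty result is the signal the Hardest Thing describes: the lib has
// a gap that a local component is papering over.

function violatesLibOnlyRule(importSpecifiers: string[]): string[] {
  return importSpecifiers.filter((s) => s.includes("_components/"));
}

// Example imports for a hypothetical MeetingAgenda module.
const meetingAgendaImports = [
  "@stackmates/lib/form-shell",
  "@stackmates/lib/field",
  "./_components/AgendaRow", // local escape hatch -> lib gap
];

console.log(violatesLibOnlyRule(meetingAgendaImports)); // ["./_components/AgendaRow"]
```

Wired into CI, this turns the stress test from a one-time judgment call into a standing gate: the meeting agenda form passes only while its violation list stays empty.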

Priorities

  1. What's broken that everyone sees? Forms 404, CRM raw HTML, stubs as content
  2. What's the standard nobody can find? Zod -> Server Action -> FormShell -> Name Attrs -> TanStack
  3. What proves the system works? CRM rewrite at L:R 1:3
  4. What stress-tests the system? Meeting agenda form from existing atoms only
  5. What prevents regression? 10-point shipping checklist

Progress

  • Pictures — Outcome + dependency maps
  • PRD Spec — Intent, stories, build, screen, navigation contracts
  • Prompt Deck — 5-card pitch

Scorecard

Priority Score: 576 (Pain 4 x Demand 3 x Edge 4 x Trend 4 x Conversion 3)

Priority (should we?) paired with Preparedness (can we?):

  1. Pain: 4 — dogfood found 10 issues. /forms 404. CRM L:R 1:15. 3 nav locations.
     Principles: 4 — Prime Directive clear. Three-mode architecture defined (IMPORT -> COPY -> USE).
  2. Demand: 3 — internal builders use forms daily. Pattern seekers have shadcn. No external signal.
     Performance: 2 — form atoms pass (L:R 1.4:1). No composition metrics. No shipping gate.
  3. Edge: 4 — three-mode architecture novel. Anti-pattern docs rare. Meeting form as stress test unique.
     Platform: 3 — atoms built, TanStack solid, 1 organism. CRM patterns broken.
  4. Trend: 4 — design systems mainstream. AI code gen needs machine-readable standards.
     Protocols: 2 — no standard process documented. No shipping checklist.
  5. Conversion: 3 — ~60% built. Atoms healthy. Missing nav, standard process, CRM rewrite.
     Players: 2 — internal only. No external users or contributors.
Metrics (target vs. now):

  • Form nav locations — target 1, now 3
  • CRM pattern L:R ratio — target 1:3, now 1:15
  • Organisms in lib — target 3+, now 1

Kill signal: If teams don't use the meeting instrument within 30 days of ship, demote to Explorations. If MeetingAgenda requires >3 new lib atoms, stop and fill the lib first. Kill date: 2026-06-30.

Context

  • Meetings PRD — Defines the meeting workflow against which this PRD stress-tests its form patterns
  • Prompt Deck PRD — Design system already showcases PromptDeck, complementary
  • Sales CRM PRD — CRM pattern rewrite (DS-04) improves CRM form showcase
  • Standards — Where proven form patterns graduate to
  • Smart Contracts — Standard fittings thesis: the form chain IS a standard fitting

Questions

What's the real cost of a design system that showcases violations of its own rules?

  • If the CRM patterns bypass every lib atom, what message does that send to the next engineer who visits?
  • Is the meeting agenda form a genuine stress test or a rigged demo — what domain complexity would actually break the system?
  • At what point does "proving ground" become "proving what we already believe" instead of "discovering what fails"?