Repository Patterns

What separates a trustworthy repository from one that lies about its own safety?

The repository standards define the 10 rules. The scorecard grades them. This page captures the patterns and anti-patterns discovered through production review — the lessons that make the next repository better than the last.

Patterns

Soft-Delete Consistency

Every read path, count path, and summary path excludes deleted rows by default. No exceptions.

| Method | Policy |
| --- | --- |
| `findById` | `WHERE id = ? AND deletedAt IS NULL` |
| `findMany` | `WHERE ... AND deletedAt IS NULL` |
| `count` | `WHERE ... AND deletedAt IS NULL` |
| `delete` | `SET deletedAt = now()` — soft delete |
| `bulkDelete` | `SET deletedAt = now()` — same policy as single delete |
| `purge` | Hard delete — separate method, explicit name, audit logged |
| `restore` | `SET deletedAt = NULL` — only exists when soft-delete is on |

Anti-pattern: delete() soft-deletes but bulkDelete() hard-deletes. This is a behavioral trap — one path assumes recoverability while the other permanently removes rows.
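The policy can be sketched with an in-memory stand-in (the `Row` shape, the `Map`-backed store, and the method set are illustrative, not the project's API). The point is structural: single and bulk delete share one code path, so their behavior cannot diverge.

```typescript
// Illustrative in-memory model of the soft-delete policy above.
interface Row {
  id: string;
  name: string;
  deletedAt: Date | null;
}

class SoftDeleteRepo {
  private rows = new Map<string, Row>();

  create(id: string, name: string): void {
    this.rows.set(id, { id, name, deletedAt: null });
  }

  // Every read path excludes deleted rows by default.
  findById(id: string): Row | undefined {
    const row = this.rows.get(id);
    return row && row.deletedAt === null ? row : undefined;
  }

  count(): number {
    return [...this.rows.values()].filter((r) => r.deletedAt === null).length;
  }

  // delete() routes through bulkDelete(), so the policy cannot diverge.
  delete(id: string): void {
    this.bulkDelete([id]);
  }

  bulkDelete(ids: string[]): void {
    for (const id of ids) {
      const row = this.rows.get(id);
      if (row) row.deletedAt = new Date();
    }
  }

  // Hard delete is a separate, explicitly named method.
  purge(id: string): void {
    this.rows.delete(id);
  }

  restore(id: string): void {
    const row = this.rows.get(id);
    if (row) row.deletedAt = null;
  }
}
```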

Sort Whitelisting

Dynamic sorting uses an explicit map. Never index arbitrary table keys.

```typescript
const sortableColumns = {
  createdAt: table.createdAt,
  updatedAt: table.updatedAt,
  name: table.name,
  status: table.status,
} as const;

type SortField = keyof typeof sortableColumns;
```

Reject unknown fields with a validation error. Do not cast to PgColumn.

Anti-pattern: table[sort.field as keyof typeof table] — this trusts user input to name valid columns and produces runtime failures or invalid SQL on bad input.
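Resolution against the whitelist can be sketched as a pure narrowing function (the string column names here stand in for the real `PgColumn` references, and the error shape is illustrative):

```typescript
// Hypothetical column map; in the real repo the values are table.createdAt etc.
const sortableColumns = {
  createdAt: 'created_at',
  updatedAt: 'updated_at',
  name: 'name',
  status: 'status',
} as const;

type SortField = keyof typeof sortableColumns;

// Unknown input in, whitelisted key or a thrown validation error out.
// No cast to PgColumn, no indexing the table with user-supplied strings.
function resolveSortField(field: string): SortField {
  if (field in sortableColumns) return field as SortField;
  throw new Error(`INVALID_SORT_FIELD: ${field}`);
}
```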

True Upserts

Use database conflict handling. Never check-then-create.

```typescript
// Correct — atomic, race-safe
await db
  .insert(table)
  .values(data)
  .onConflictDoUpdate({
    target: [table.organisationId, table.name],
    set: { ...data, updatedAt: new Date() },
  });
```

Anti-pattern: findByNameAndOrg then create — not atomic, races under concurrent requests, produces duplicates unless the database has a unique constraint AND you catch the conflict.

Prerequisite: Unique index on the business key (organisation_id, name).
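A Drizzle-style schema fragment for that prerequisite might look like the following (the `things` table and all column names are illustrative; only the composite unique index is the point — it is what `onConflictDoUpdate` targets):

```typescript
import { pgTable, uuid, text, timestamp, uniqueIndex } from 'drizzle-orm/pg-core';

// Illustrative schema. Without the unique index on (organisation_id, name),
// the upsert has no conflict target to anchor on.
export const things = pgTable(
  'things',
  {
    id: uuid('id').primaryKey().defaultRandom(),
    organisationId: uuid('organisation_id').notNull(),
    name: text('name').notNull(),
    deletedAt: timestamp('deleted_at'),
  },
  (t) => [uniqueIndex('things_org_name_uq').on(t.organisationId, t.name)],
);
```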

Honest Headers

Repository file headers must match reality.

| Claim | Only Valid When |
| --- | --- |
| "Zero any casts" | Zero `any` anywhere in the file, including `.prepare()` |
| "Optimized" | Benchmarked with p95 evidence against SLOs |
| "Type-safe" | No `unknown as`, no `as PgColumn`, no cast chains |
| "Soft-delete aware" | Every read/count/summary excludes deleted rows |

Anti-pattern: Header says "zero any casts" but prepared statements and dynamic query builders use any. This is type-safety theater.

Error Contract

Error helper signatures must be consistent across every call site.

```typescript
// Define once
error(code: string, message: string): RepoError

// Use consistently
this.error('NOT_FOUND', 'AgentProfile not found');
this.error('CREATE_FAILED', 'Failed to create agent profile');
```

Anti-pattern: Swapping code and message arguments between call sites. This makes the error contract unreliable and breaks error handling downstream.
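One way to make swapped arguments impossible rather than merely discouraged is to type the code parameter as a closed union, so a transposed call fails to compile. The `RepoError` shape and the code list here are illustrative:

```typescript
// Illustrative error contract. With a closed union for codes, a swapped call
// like repoError('AgentProfile not found', 'NOT_FOUND') is a compile error.
type RepoErrorCode = 'NOT_FOUND' | 'CREATE_FAILED' | 'UPDATE_FAILED';

interface RepoError {
  readonly code: RepoErrorCode;
  readonly message: string;
}

function repoError(code: RepoErrorCode, message: string): RepoError {
  return { code, message };
}
```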

Prepared Statement Scope

Prepare only fixed-shape hot-path reads. Four per repository is typical.

| Prepare | Skip |
| --- | --- |
| `findById` | `findAll` (dynamic filters, sorting, paging) |
| `findBySystemUserId` | `bulkUpdate` (variable row count) |
| `findByNameAndOrg` | Dynamic search with optional clauses |
| `countByOrg` | Any method where query shape varies by caller |

Include soft-delete filtering IN the prepared SQL, not in post-fetch application code. The prepared statement should encode the full access rule.
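As a plain-SQL sketch of that rule (the table and column names are illustrative), the prepared text itself carries both the tenancy and soft-delete predicates, so no caller can forget them:

```typescript
// The full access rule lives in the prepared SQL, not in post-fetch filtering.
const FIND_BY_ID_SQL = `
  SELECT id, organisation_id, name
  FROM agent_profiles
  WHERE id = $1
    AND organisation_id = $2
    AND deleted_at IS NULL
`.trim();
```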

Mapper Discipline

One mapper per entity shape. Row-to-domain mapping happens in exactly one place.

| Shape | Mapper | Where |
| --- | --- | --- |
| Full entity | `mapToDomain` | Repository private method |
| Summary/list | `mapToSummary` | Repository private method |
| Join result | `mapJoinResult` | Repository private method |

Anti-pattern: Selecting a custom row shape from a join, then casting the result back to the base Row type. Define the join result type explicitly.
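A minimal `mapToDomain` sketch (the `Row` and `AgentProfile` shapes and the JSON `metadata` column are illustrative): parsing and shaping happen in exactly one function, so callers never touch raw rows or hide parsing behind a cast.

```typescript
// Illustrative row and domain shapes.
interface Row {
  id: string;
  name: string;
  metadata: string | null; // JSON text column
  deleted_at: string | null;
}

interface AgentProfile {
  id: string;
  name: string;
  metadata: Record<string, unknown>;
  deletedAt: Date | null;
}

// The single place row-to-domain mapping happens. JSON parsing is explicit
// here rather than deferred to callers.
function mapToDomain(row: Row): AgentProfile {
  return {
    id: row.id,
    name: row.name,
    metadata: row.metadata
      ? (JSON.parse(row.metadata) as Record<string, unknown>)
      : {},
    deletedAt: row.deleted_at ? new Date(row.deleted_at) : null,
  };
}
```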

Generator Checklist

The Nx generator encodes policy. It should make inconsistent repositories impossible to scaffold.

Required Inputs

| Input | Why |
| --- | --- |
| `entityName` | Drives file, class, method, and type naming |
| `tableImport` | Anchors repo to a single schema source |
| `domainType` | Generates full entity mapper |
| `summaryType` | Generates partial-select mapper |
| `softDelete` | Determines read filters and delete method behavior |
| `tenantColumn` | Ensures all queries support tenancy constraints |
| `preparedLookups` | Allows only specific stable methods to be prepared |
| `jsonColumns` | Forces explicit parsing/validation at the boundary |
| `uniqueKeys` | Generates a true upsert with `onConflict` |

Quality Gates

The generator should fail when:

  • any appears in generated repository code
  • Dynamic order-by lacks a whitelist map
  • Soft-delete enabled but summary/count methods miss deletedAt IS NULL
  • Upsert requested without a declared unique key
  • Prepared statement requested for a dynamic filter method
  • Write methods generated without db | tx executor support
  • Hard delete generated in a soft-delete repository

Generated Tests

Every scaffolded repo ships with tests for:

  • create persists and maps correctly
  • findById excludes soft-deleted rows
  • findAll and count return consistent totals under same filters
  • delete and bulkDelete follow the same policy
  • restore works only when soft-delete is enabled
  • Upsert is idempotent under repeated execution
  • Transaction tests prove methods work with tx
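The upsert-idempotency case, for example, can be expressed against an in-memory stand-in (the keyed store and `upsertByOrgAndName` helper are illustrative, not the generated test's actual API):

```typescript
// In-memory stand-in for a table with a unique key on (organisationId, name).
const rows = new Map<string, { name: string; status: string }>();

// One atomic step, mirroring onConflictDoUpdate: repeated calls update
// the existing row instead of inserting a duplicate.
function upsertByOrgAndName(orgId: string, name: string, status: string): void {
  rows.set(`${orgId}:${name}`, { name, status });
}

upsertByOrgAndName('org-1', 'alpha', 'active');
upsertByOrgAndName('org-1', 'alpha', 'paused'); // repeat: no duplicate row
```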

Tooling

Existing

Two audit scripts in tools/scripts/platform-engineering/:

```sh
# 10-point rubric (R01-R10) — static consistency scoring
npx tsx tools/scripts/platform-engineering/repo-quality-audit.ts --all
npx tsx tools/scripts/platform-engineering/repo-quality-audit.ts --repo=agent-profile
npx tsx tools/scripts/platform-engineering/repo-quality-audit.ts --all --format=table

# Compliance matrix — file structure, interface, testing, patterns, docs
npx tsx tools/scripts/platform-engineering/audit-data-repositories.ts --all
```

The 10-point rubric checks: soft-delete reads (R01), soft-delete writes (R02), multi-tenant isolation (R03), batch operations (R04), prepared statements (R05), error handling (R06), partial selects (R07), transaction wrapping (R08), organisationId validation (R09), domain mapping (R10).

Planned: Scorecard CLI

Combines static consistency with runtime benchmarks and baseline tracking.

```sh
# Full scorecard — consistency + runtime + type integrity
npx tsx tools/scripts/platform-engineering/repo-scorecard.ts --all

# Single repo, machine-readable
npx tsx tools/scripts/platform-engineering/repo-scorecard.ts --repo=agent-profile --format=json

# Save baseline for trend tracking
npx tsx tools/scripts/platform-engineering/repo-scorecard.ts --all --save-baseline

# Compare against saved baseline — catch drift
npx tsx tools/scripts/platform-engineering/repo-scorecard.ts --all --compare-baseline=baselines/latest.json
```

Scorecard output per repo: consistency score (10-rule pass rate), runtime p50/p95/p99 against SLOs, any count, unsafe cast count, soft-delete violations, overall weighted grade. The --compare-baseline flag shows delta per metric so regressions are visible in review.

Nx Generator

```sh
# Generate a new repository with all policies encoded
pnpm nx g ./tools:data-repository-optimized \
  --name=agent-collaboration-events \
  --tableName=agentCollaborationEvents
```

Generates: domain entity + DTOs + repository interface + Drizzle implementation + integration test. Five optimization flags (all default on): prepared statements, batch operations, partial selects, error mapping, type safety.

73 repositories currently generated. Reference implementation: agent-profile (926 lines).

Review Grading

Production review rubric from the AgentProfile audit.

| Dimension | Score Range | What It Measures |
| --- | --- | --- |
| Production quality | /10 | Correctness, consistency, safety under load |
| Effort and intent | /10 | Structure, coverage, domain awareness |

AgentProfile baseline: 6.5/10 production, 8/10 intent. Gap = the patterns above.

| Strong | Weak |
| --- | --- |
| Consistent repository shape | Soft-delete policy inconsistent across methods |
| Explicit row/domain mapping | `any` in prepared statements and dynamic builders |
| Domain-specific lookups (`findByExternalAuthId`) | Fake upsert (check-then-create, not atomic) |
| Soft-delete awareness in some read paths | Error helper arguments swapped between call sites |

Context

Questions

If the generator encodes policy and the scorecard grades compliance, what role does code review serve — catching what the tools missed, or validating what the tools can't measure?

  • When a repository scores 8/10 on intent but 6.5/10 on production quality, which number predicts the next incident?
  • At what point does the generator replace the patterns page — and is that the goal?
  • Which anti-pattern on this page has cost you the most time in production, and does the generator prevent it yet?