
Repository Quality

How do you know your repositories are fast AND predictable?

Speed without consistency is chaos. Consistency without speed is stagnation. The scorecard below grades both — runtime performance AND structural discipline — so you catch drift before it compounds.

Three Families

| Family | What It Measures | Why It Matters |
| --- | --- | --- |
| Runtime | Query latency, transaction duration, DB calls | Users feel this directly |
| Consistency | Method presence, ordering, policy compliance | Developers feel this during maintenance |
| Change Effectiveness | Churn rate, migration safety, review friction | The team feels this across quarters |

Runtime SLOs

Measure at the repository method level, not the endpoint level. Endpoints aggregate — repositories reveal.

| Metric | p50 Target | p95 Target | p99 Target | Breach Action |
| --- | --- | --- | --- | --- |
| Single-row read | < 2ms | < 10ms | < 50ms | Check index coverage |
| List query (100 rows) | < 15ms | < 50ms | < 200ms | Check query plan, add pagination |
| Write (insert/upsert) | < 5ms | < 25ms | < 100ms | Check constraint overhead |
| Transaction block | < 20ms | < 80ms | < 300ms | Reduce scope or split |
| DB calls per request | <= 3 | <= 5 | <= 8 | Consolidate or batch |
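Checking sampled latencies against these targets reduces to a percentile computation. A minimal sketch in TypeScript; the thresholds come from the table above, while the in-memory sample handling and names (`summarize`, `breaches`) are illustrative stand-ins for a real metrics backend:

```typescript
type Percentiles = { p50: number; p95: number; p99: number };

// Nearest-rank percentile over a pre-sorted sample set.
function percentile(sorted: number[], q: number): number {
  const idx = Math.min(sorted.length - 1, Math.ceil(q * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function summarize(latenciesMs: number[]): Percentiles {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  return {
    p50: percentile(sorted, 0.5),
    p95: percentile(sorted, 0.95),
    p99: percentile(sorted, 0.99),
  };
}

// SLO for single-row reads, from the table: p50 < 2ms, p95 < 10ms, p99 < 50ms.
const singleRowReadSlo: Percentiles = { p50: 2, p95: 10, p99: 50 };

// Returns the percentile labels that meet or exceed their target.
function breaches(actual: Percentiles, slo: Percentiles): string[] {
  return (["p50", "p95", "p99"] as const).filter((k) => actual[k] >= slo[k]);
}
```

Any non-empty `breaches` result should trigger the corresponding breach action from the table.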

Consistency Rules

Ten structural checks. Each scores pass (1) or fail (0). Total = consistency percentage.

| # | Rule | Checks |
| --- | --- | --- |
| 1 | Method presence | Every repo has `findById`, `findMany`, `create`, `update` |
| 2 | Section ordering | Type imports, then queries, then mutations, then helpers |
| 3 | Soft-delete policy | `deletedAt IS NULL` in every read, or hard-delete with audit log |
| 4 | Sort safety | Default `ORDER BY` on every `findMany` |
| 5 | Transaction readiness | Methods accept a `db \| tx` executor parameter |
| 6 | Prepared statements | Hot-path reads use `.prepare()` |
| 7 | Error mapping | DB errors map to domain result types, not raw throws |
| 8 | Type safety | Zero `any` in method signatures |
| 9 | Mapper policy | Row-to-entity mapping in one place per entity |
| 10 | Query builder policy | Composable filters, not string concatenation |
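Several of these rules compose naturally in a single method. A minimal sketch of a `findMany` that satisfies rules 3, 4, and 5; the `Executor` interface and table/column names are hypothetical stand-ins for whatever client/transaction type your driver exposes:

```typescript
// Hypothetical stand-in for a driver client or transaction handle.
interface Executor {
  query(sql: string, params: unknown[]): Promise<Record<string, unknown>[]>;
}

interface UserRow {
  id: string;
  email: string;
  created_at: string;
  deleted_at: string | null;
}

// Rule 5: the executor is a parameter, so the same method runs
// inside or outside a transaction without duplication.
async function findManyUsers(exec: Executor, limit = 100): Promise<UserRow[]> {
  const rows = await exec.query(
    // Rule 3: deletedAt IS NULL on every read.
    // Rule 4: a default ORDER BY, so results and pagination are stable.
    `SELECT id, email, created_at, deleted_at
       FROM users
      WHERE deleted_at IS NULL
      ORDER BY created_at DESC, id DESC
      LIMIT $1`,
    [limit],
  );
  // Rule 9 would route this cast through a single shared row-to-entity mapper.
  return rows as unknown as UserRow[];
}
```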

Weighted Scorecard

| Dimension | Weight | Source |
| --- | --- | --- |
| Runtime performance | 40% | p50/p95/p99 against SLOs above |
| Structural consistency | 35% | 10-rule checklist pass rate |
| Type integrity | 15% | Zero `any`, zero `@ts-ignore` in repo layer |
| Change effectiveness | 10% | Churn rate < 15%, migration passes first time |
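The weighting reduces to a single expression. A sketch, assuming each dimension has already been normalized to a 0-100 score (field names are illustrative):

```typescript
interface DimensionScores {
  runtime: number;             // % of SLO checks passing
  consistency: number;         // 10-rule checklist pass rate
  typeIntegrity: number;       // 100 only with zero `any` / `@ts-ignore`
  changeEffectiveness: number; // churn + migration signals, normalized
}

// Weights mirror the scorecard table: 40 / 35 / 15 / 10.
function weightedScore(s: DimensionScores): number {
  return (
    s.runtime * 0.4 +
    s.consistency * 0.35 +
    s.typeIntegrity * 0.15 +
    s.changeEffectiveness * 0.1
  );
}
```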

Grading Tiers

| Score | Grade | Action |
| --- | --- | --- |
| 90-100 | Gold | Maintain, share as reference |
| 80-89 | Acceptable | Minor drift; schedule cleanup |
| 70-79 | Drift | Dedicated sprint to remediate |
| < 70 | Refactor | Stop features, fix foundations |
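Mapped to code, the tiers are a simple threshold ladder. A sketch; boundary handling follows the table, with 90, 80, and 70 inclusive lower bounds:

```typescript
type Grade = "Gold" | "Acceptable" | "Drift" | "Refactor";

// Maps a 0-100 weighted score to its tier.
function grade(score: number): Grade {
  if (score >= 90) return "Gold";
  if (score >= 80) return "Acceptable";
  if (score >= 70) return "Drift";
  return "Refactor";
}
```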

Anti-Vanity Rules

| Trap | Why It Lies | Use Instead |
| --- | --- | --- |
| Average latency | Hides p99 spikes that users remember | Percentile distribution |
| Lines of code | Conflates complexity with capability | Method count + consistency % |
| Test count | 500 tests testing nothing useful is worse than 50 | Mutation score or branch coverage |
| "Zero errors" | Suppressed errors still exist | `any` count + `@ts-ignore` sum |
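The first trap is easy to demonstrate with arithmetic: one slow outlier among ten calls barely moves the mean but owns the tail. The numbers below are purely illustrative:

```typescript
// Ten calls: nine fast, one catastrophically slow.
const latenciesMs = [5, 5, 5, 5, 5, 5, 5, 5, 5, 500];

// Mean: (9 * 5 + 500) / 10 = 54.5ms. Looks "fine" on a dashboard.
const mean = latenciesMs.reduce((a, b) => a + b, 0) / latenciesMs.length;

// Nearest-rank p99: 500ms. The spike users actually remember.
const sorted = [...latenciesMs].sort((a, b) => a - b);
const p99 = sorted[Math.ceil(0.99 * sorted.length) - 1];
```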

Analysis Slicing

Cut the scorecard by different axes to find where problems cluster.

| Slice | Reveals |
| --- | --- |
| By repository | Which entity is the weakest link |
| By method type | Reads fast but writes slow? Or vice versa |
| By generator | Scaffold quality vs hand-written quality |
| By age | Old repos drifting from current standards |

Four Buckets

Every repository falls into one of four states. The bucket determines the fix.

| State | Runtime | Consistency | Prescription |
| --- | --- | --- | --- |
| Fast + consistent | Pass | Pass | Gold standard; extract patterns |
| Fast + inconsistent | Pass | Fail | Structural refactor (low risk) |
| Consistent + slow | Fail | Pass | Performance tuning (index, query) |
| Slow + inconsistent | Fail | Fail | Rewrite candidate; highest priority |
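Classification from the two pass/fail verdicts is mechanical; a sketch:

```typescript
type Bucket =
  | "Fast + consistent"
  | "Fast + inconsistent"
  | "Consistent + slow"
  | "Slow + inconsistent";

// Maps the two scorecard verdicts to one of the four buckets above.
function classify(runtimePass: boolean, consistencyPass: boolean): Bucket {
  if (runtimePass && consistencyPass) return "Fast + consistent";
  if (runtimePass) return "Fast + inconsistent";
  if (consistencyPass) return "Consistent + slow";
  return "Slow + inconsistent"; // rewrite candidate: highest priority
}
```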

Questions

  • When a repository scores gold on consistency but fails runtime SLOs, is the structure helping or hiding the problem?
  • Which of the ten consistency rules catches the most violations in your codebase — and does that reveal a training gap or a tooling gap?
  • If you grade repositories by age, does quality degrade linearly or does it cliff at a specific point?
  • What changes when you measure repository quality per entity rather than per service?