
Tech Review Process

Which gap in the feature matrix justifies this tool — and can you prove it?

A tool evaluated in isolation is a toy. Anchored to a feature-matrix gap with scored evidence, it becomes a decision. This process closes the loop: gap → candidates → verdict → updated matrix.

Five Gates

Every tech review runs the gates in order, Gate 0 through Gate 5. No skipping.

| # | Gate | Job | Output |
|---|------|-----|--------|
| 0 | ANCHOR | Find the L0 gap first. State the JTBD. State the workaround cost. | Feature ID(s), JTBD statement, workaround cost |
| 1 | SOURCE | Find 3–5 candidates using sources matched to the type. Record where each came from. | Candidate list with source URLs |
| 2 | CLASSIFY | Software vs Hardware vs Hybrid. Routes to the correct scoring path. | Classification + scoring path |
| 3 | SCORE | Apply the 5-dimension scoring matrix. Each score 1–5 with quoted evidence. | Score table + geometric mean |
| 4 | VERDICT | Map the composite to Buy / Build / Hybrid / Defer. | Verdict + rationale |
| 5 | UPDATE | Write the verdict back to the feature matrix. Save the receipt. | Updated docs + receipt |

Gate 0 is a hard stop. No sourcing without a named feature ID.
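The gate order and the Gate 0 hard stop can be sketched as a tiny state machine. This is illustrative only: the names `Review`, `run_gate`, and the feature-ID field are assumptions, not part of the process doc.

```python
from dataclasses import dataclass, field

GATES = ["ANCHOR", "SOURCE", "CLASSIFY", "SCORE", "VERDICT", "UPDATE"]

@dataclass
class Review:
    feature_ids: list[str] = field(default_factory=list)  # named during Gate 0
    completed: list[str] = field(default_factory=list)    # gates passed so far

def run_gate(review: Review, gate: str) -> None:
    """Run the next gate. Gates execute strictly in order, and nothing
    past ANCHOR runs without at least one named feature ID."""
    expected = GATES[len(review.completed)]
    if gate != expected:
        raise ValueError(f"no skipping: next gate is {expected}")
    if gate != "ANCHOR" and not review.feature_ids:
        raise ValueError("hard stop: Gate 0 produced no feature ID")
    review.completed.append(gate)
```

Attempting SOURCE before ANCHOR, or any gate with an empty feature-ID list, raises rather than silently continuing.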

Software Scoring

Five dimensions. Geometric mean. Score each 1–5 with evidence — "good fit" fails.

| Dimension | Score 1 | Score 3 | Score 5 |
|---|---|---|---|
| JTBD Fit | Covers <30% of job | Covers core, gaps in edge cases | Covers 100% including edge cases |
| Data Sovereignty | Vendor owns all data, no export | Export available, some lock-in | We own all data, open formats, no lock-in |
| Team Capability | Requires skills we don't have | Some upskilling needed | Team can run it today |
| TCO (3yr) | >$50K/yr | $10–50K/yr | <$10K/yr |
| Build Cost | Build would take <2 weeks | Build would take 1–3 months | Build would take >3 months |
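The composite is the geometric mean of the five dimension scores. Unlike an arithmetic mean, a single 1 drags the composite down hard, which is the point. A minimal sketch (the dimension keys are illustrative):

```python
import math

def composite(scores: dict[str, int]) -> float:
    """Geometric mean of the 1-5 dimension scores."""
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each dimension must score 1-5")
    return math.prod(scores.values()) ** (1 / len(scores))

# One weak dimension dominates: the arithmetic mean here is 3.4,
# but the geometric mean is about 3.03.
composite({"jtbd": 4, "sovereignty": 4, "capability": 4, "tco": 4, "build": 1})
```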

Verdict thresholds:

| Composite | Data Sovereignty | Verdict |
|---|---|---|
| ≥3.5 | ≥3 | Buy |
| ≥3.5 | <3 | Build |
| 2.5 to <3.5 | Any | Hybrid: pilot before committing |
| <2.5 | Any | Defer |
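The threshold table maps onto a small function. A sketch, assuming the 2.5–3.5 band is inclusive at the bottom and exclusive at the top:

```python
def software_verdict(composite: float, data_sovereignty: int) -> str:
    """Map the composite and the Data Sovereignty score to a verdict,
    per the software threshold table."""
    if composite >= 3.5:
        # High composite: sovereignty decides between renting and owning.
        return "Buy" if data_sovereignty >= 3 else "Build"
    if composite >= 2.5:
        return "Hybrid"  # pilot before committing
    return "Defer"
```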

Hardware / DePIN Scoring

Five dimensions. Geometric mean. Never buy without demand side ≥ 2.

| Dimension | Score 1 | Score 3 | Score 5 |
|---|---|---|---|
| Operator ROI | >36mo payback | 24–36mo payback | <24mo payback |
| Network Health | <100 nodes | 100–1,000 nodes | >1,000 nodes, geographic spread |
| Token Sustainability | Unlimited emissions, no demand | Decreasing emissions, no demand driver | Decreasing emissions + usage-driven demand |
| Demand Side | No identifiable buyer | Pilot buyers only | Enterprise buyers + growing demand |
| Exit | Hardware worthless if network fails | Some resale value | Strong secondary market |

Verdict thresholds: composite ≥ 3.5 → Buy; < 3.0 → Defer.
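As a sketch, the hardware thresholds plus the demand-side gate look like this. Two assumptions are baked in and worth flagging: the 3.0–3.5 band has no stated automatic verdict, so it is sent to manual review here, and "never buy without demand side ≥ 2" is treated as an automatic Defer.

```python
def hardware_verdict(composite: float, demand_side: int) -> str:
    """Hardware/DePIN verdict. Demand side is a hard gate on top of
    the composite thresholds."""
    if demand_side < 2:
        return "Defer"  # never buy without demand side >= 2 (Defer is an assumption)
    if composite >= 3.5:
        return "Buy"
    if composite < 3.0:
        return "Defer"
    return "Review"  # 3.0-3.5 band: no automatic verdict stated (assumption)
```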

Sourcing by Type

| Type | Primary Sources | Secondary Sources |
|---|---|---|
| SaaS / AI-native | Product Hunt, G2, a16z/YC portfolios, Perplexity | Competitor stack pages, LinkedIn job ads |
| OSS | GitHub trending, CNCF landscape, Hacker News | Awesome lists, pkg.go.dev, npmjs |
| DePIN devices | DePINscan.io, DePINhub.io | Protocol Discord/Twitter, node sale announcements |
| Robotics | NVIDIA Isaac ecosystem, Boston Dynamics partners, FrodoBots network | arXiv embodied AI, YC robotics batch |
| AI compute | io.net, Akash, Render, Aethir | GPU node announcements, Twitter lists |

Hardware Strategy

Hardware strategy is simple: buy. We don't manufacture DePIN devices or robots — we operate them.

The evaluation question is: does this device earn its cost in a reasonable time, on a network that won't collapse?

Apply the five hardware dimensions above. If any of these are true, stop:

  • Demand side scores 1: no identifiable buyer for the data/service
  • Network health scores 1: launch-phase protocol, unproven
  • Token emissions still climbing: the reward schedule is inflationary
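The stop conditions above can be checked mechanically. A sketch only: the function name, score keys, and the emissions flag are illustrative, not defined by this process.

```python
def hardware_stop_reasons(scores: dict[str, int], emissions_climbing: bool) -> list[str]:
    """Return every stop condition that applies; any non-empty result means stop."""
    reasons = []
    if scores.get("demand_side", 1) < 2:
        reasons.append("no buyer for the data/service")
    if scores.get("network_health", 1) <= 1:
        reasons.append("launch-phase protocol, unproven")
    if emissions_climbing:
        reasons.append("inflationary reward schedule")
    return reasons
```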

See DePIN devices for the current hardware inventory.

Software Strategy

Software strategy: engineer what we need, buy commodity.

The dividing line is data sovereignty. If the tool touches data that trains our competitive advantage, we own it. If it's a commodity pipe, renting is fine.

The build cost dimension accounts for AI-assisted development. A feature that took months in 2022 takes weeks now. This shifts more tools into "build" territory — update scores quarterly as capability improves.

See Buy or Build for the full sovereignty framework.

Closing the Loop

Every verdict feeds back:

| Verdict | Action |
|---|---|
| Buy | Tool added, feature state updated to L1 (Stub) |
| Build | PRD created, feature state stays L0 until PRD ships |
| Defer | Documented with trigger criteria for re-evaluation |

Receipts accumulate in .ai/receipts/. Run quarterly per category, or on-demand when a venture activates a new vertical.


Questions

What makes a tech review different from a product demo — and what evidence would a demo never show you?

  • If Gate 0 requires a named feature ID before sourcing, what does that imply about the order of discovery — market-first or gap-first?
  • When build cost drops because AI coding improves, which previously-bought tools cross the threshold into "build" territory first?
  • A tool scores 4.0 on JTBD fit but 1.5 on data sovereignty — what does the verdict tell you that the feature score doesn't?
  • Which hardware dimension is most likely to be faked in a protocol's marketing materials — and how would you verify it independently?