Tech Review Process
Which gap in the feature matrix justifies this tool — and can you prove it?
A tool evaluated in isolation is a toy. Anchored to a feature-matrix gap with scored evidence, it becomes a decision. This process closes the loop: gap → candidates → verdict → updated matrix.
Six Gates
Every tech review runs six gates, Gate 0 through Gate 5, in order. No skipping.
| # | Gate | Job | Output |
|---|---|---|---|
| 0 | ANCHOR | Find the L0 gap first. State the JTBD. State the workaround cost. | Feature ID(s), JTBD statement, workaround cost |
| 1 | SOURCE | Find 3–5 candidates using sources matched to the type. Record where each came from. | Candidate list with source URLs |
| 2 | CLASSIFY | Software vs Hardware vs Hybrid. Routes to the correct scoring path. | Classification + scoring path |
| 3 | SCORE | Apply 5-dimension scoring matrix. Each 1–5 with quoted evidence. | Score table + geometric mean |
| 4 | VERDICT | Map composite to Buy / Build / Hybrid / Defer. | Verdict + rationale |
| 5 | UPDATE | Write verdict back to the feature matrix. Save receipt. | Updated docs + receipt |
Gate 0 is a hard stop. No sourcing without a named feature ID.
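A minimal sketch of how the gate order and the Gate 0 hard stop could be enforced mechanically; `Review`, `enter_gate`, and the field names are assumptions for illustration, not an existing tool:

```python
from dataclasses import dataclass, field

# Gate order from the table above.
GATES = ["ANCHOR", "SOURCE", "CLASSIFY", "SCORE", "VERDICT", "UPDATE"]

@dataclass
class Review:
    feature_ids: list[str] = field(default_factory=list)  # Gate 0 output
    jtbd: str = ""
    workaround_cost: str = ""
    completed: list[str] = field(default_factory=list)

def enter_gate(review: Review, gate: str) -> None:
    """Refuse out-of-order gates and enforce the Gate 0 hard stop."""
    if len(review.completed) == len(GATES):
        raise ValueError("review already complete")
    expected = GATES[len(review.completed)]
    if gate != expected:
        raise ValueError(f"gates run in order; next is {expected}")
    # Gate 0 hard stop: no sourcing without a named feature ID, JTBD, and workaround cost.
    if gate == "SOURCE" and not (review.feature_ids and review.jtbd and review.workaround_cost):
        raise ValueError("Gate 0 incomplete: no sourcing without a named feature ID")
    review.completed.append(gate)
```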
Software Scoring
Five dimensions. Geometric mean. Score each 1–5 with quoted evidence; a bare "good fit" fails.
| Dimension | Score 1 | Score 3 | Score 5 |
|---|---|---|---|
| JTBD Fit | Covers <30% of job | Covers core, gaps in edge cases | Covers 100% including edge cases |
| Data Sovereignty | Vendor owns all data, no export | Export available, some lock-in | We own all data, open formats, no lock-in |
| Team Capability | Requires skills we don't have | Some upskilling needed | Team can run it today |
| TCO (3yr) | >$50K/yr | $10–50K/yr | <$10K/yr |
| Build Cost | Build would take <2 weeks | Build would take 1–3 months | Build would take >3 months |
Verdict thresholds:
| Composite | Data Sovereignty | Verdict |
|---|---|---|
| ≥3.5 | ≥3 | Buy |
| ≥3.5 | <3 | Build |
| ≥2.5 and <3.5 | Any | Hybrid: pilot before committing |
| <2.5 | Any | Defer |
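The arithmetic is small enough to sketch. Assuming scores arrive as a dict keyed by dimension (the key names are illustrative), the composite and verdict follow the two tables directly:

```python
from math import prod

SOFTWARE_DIMS = ["jtbd_fit", "data_sovereignty", "team_capability", "tco_3yr", "build_cost"]

def geometric_mean(scores: dict[str, int], dims: list[str]) -> float:
    values = [scores[d] for d in dims]  # KeyError if any dimension went unscored
    return prod(values) ** (1 / len(values))

def software_verdict(scores: dict[str, int]) -> str:
    composite = geometric_mean(scores, SOFTWARE_DIMS)
    if composite >= 3.5:
        # Data Sovereignty gates the Buy verdict even when the composite clears 3.5.
        return "Buy" if scores["data_sovereignty"] >= 3 else "Build"
    if composite >= 2.5:
        return "Hybrid"  # pilot before committing
    return "Defer"
```

Note how the geometric mean drags the composite toward the weakest dimension: scores of 4, 2, 4, 4, 5 average 3.8 arithmetically but compose to roughly 3.64 here, and the Data Sovereignty gate then flips the verdict from Buy to Build.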
Hardware / DePIN Scoring
Five dimensions. Geometric mean. Never buy without Demand Side ≥ 2.
| Dimension | Score 1 | Score 3 | Score 5 |
|---|---|---|---|
| Operator ROI | >36mo payback | 24–36mo payback | <24mo payback |
| Network Health | <100 nodes | 100–1,000 nodes | >1,000 nodes, geographic spread |
| Token Sustainability | Unlimited emissions, no demand | Decreasing emissions, no demand driver | Decreasing emissions + usage-driven demand |
| Demand Side | No identifiable buyer | Pilot buyers only | Enterprise buyers + growing demand |
| Exit | Hardware worthless if network fails | Some resale value | Strong secondary market |
Verdict thresholds: composite ≥3.5 → Buy; composite <3.0 → Defer. The Demand Side floor applies regardless of composite.
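The hardware path uses the same composite arithmetic plus the Demand Side floor. A sketch with illustrative names; since the thresholds above leave the 3.0–3.5 band unassigned, the sketch flags it rather than deciding:

```python
from math import prod

HARDWARE_DIMS = ["operator_roi", "network_health", "token_sustainability", "demand_side", "exit"]

def hardware_verdict(scores: dict[str, int]) -> str:
    if scores["demand_side"] < 2:
        return "Defer"  # hard floor: never buy without a buyer for the data/service
    values = [scores[d] for d in HARDWARE_DIMS]
    composite = prod(values) ** (1 / len(values))
    if composite >= 3.5:
        return "Buy"
    if composite < 3.0:
        return "Defer"
    return "Judgment call"  # the thresholds above don't assign the 3.0-3.5 band
```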
Sourcing by Type
| Type | Primary Sources | Secondary Sources |
|---|---|---|
| SaaS / AI-native | Product Hunt, G2, a16z/YC portfolios, Perplexity | Competitor stack pages, LinkedIn job ads |
| OSS | GitHub trending, CNCF landscape, Hacker News | Awesome lists, pkg.go.dev, npmjs |
| DePIN devices | DePINscan.io, DePINhub.io | Protocol Discord/Twitter, node sale announcements |
| Robotics | NVIDIA Isaac ecosystem, Boston Dynamics partners, FrodoBots network | arXiv embodied AI, YC robotics batch |
| AI compute | io.net, Akash, Render, Aethir | GPU node announcements, Twitter lists |
Hardware Strategy
Hardware strategy is simple: buy. We don't manufacture DePIN devices or robots — we operate them.
The evaluation question is: does this device earn its cost in a reasonable time, on a network that won't collapse?
Apply the five hardware dimensions above. If any of the following is true, stop; a minimal pre-screen sketch follows the list:
- Demand side < 2 — no buyer for the data/service
- Network health < 1 — launch-phase protocol, unproven
- Token emissions still climbing — the reward schedule is inflationary
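One way to run those three checks before any composite is computed; the function and parameter names are assumptions, and `emissions_climbing` is a flag because the emissions check concerns the reward schedule's direction, not a 1–5 score:

```python
def hardware_hard_stops(scores: dict[str, int], emissions_climbing: bool) -> list[str]:
    """Collect stop reasons before scoring; any entry means stop."""
    stops = []
    if scores.get("demand_side", 0) < 2:
        stops.append("demand side < 2: no buyer for the data/service")
    if scores.get("network_health", 0) < 1:
        stops.append("network health < 1: launch-phase protocol, unproven")
    if emissions_climbing:
        stops.append("emissions still climbing: reward schedule is inflationary")
    return stops
```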
See DePIN devices for the current hardware inventory.
Software Strategy
Software strategy: engineer what we need, buy commodity.
The dividing line is data sovereignty. If the tool touches data that trains our competitive advantage, we own it. If it's a commodity pipe, renting is fine.
The build cost dimension accounts for AI-assisted development. A feature that took months in 2022 takes weeks now. This shifts more tools into "build" territory — update scores quarterly as capability improves.
See Buy or Build for the full sovereignty framework.
Closing the Loop
Every verdict feeds back:
| Verdict | Action |
|---|---|
| Buy | Tool added, feature state updated to L1 (Stub) |
| Build | PRD created, feature state stays L0 until PRD ships |
| Hybrid | Pilot runs before committing (see verdict thresholds) |
| Defer | Documented with trigger criteria for re-evaluation |
Receipts accumulate in .ai/receipts/. Run quarterly per category, or on-demand when a venture activates a new vertical.
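The receipt format isn't pinned down here, so the following shape is an assumption: a plausible minimal receipt written as JSON into `.ai/receipts/`, with a placeholder feature ID.

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical receipt shape; field names and the F-042 ID are placeholders, not a fixed schema.
receipt = {
    "date": date.today().isoformat(),
    "feature_ids": ["F-042"],
    "classification": "software",
    "candidates": ["tool-a", "tool-b", "tool-c"],
    "scores": {"jtbd_fit": 4, "data_sovereignty": 2,
               "team_capability": 4, "tco_3yr": 4, "build_cost": 5},
    "composite": 3.64,
    "verdict": "Build",
    "rationale": "Composite clears 3.5 but Data Sovereignty < 3, so Buy flips to Build.",
}

path = Path(".ai/receipts") / f"{receipt['date']}-F-042.json"
path.parent.mkdir(parents=True, exist_ok=True)  # create the receipts dir on first run
path.write_text(json.dumps(receipt, indent=2))
```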
Context
- Feature Matrix — L0 gaps are the starting point for every review
- Buy or Build — Sovereignty framework and applied verdicts
- All RaaS Functions — Feature catalog with IDs and ROI scores
- DePIN Devices — Current hardware inventory and evaluation history
- RFP Process — Procurement lifecycle from need to sign-off
- Horizontal SaaS — JTBD specs with top products per category
Questions
- What makes a tech review different from a product demo — and what evidence would a demo never show you?
- If Gate 0 requires a named feature ID before sourcing, what does that imply about the order of discovery — market-first or gap-first?
- When build cost drops because AI coding improves, which previously-bought tools cross the threshold into "build" territory first?
- A tool scores 4.0 on JTBD fit but 1.5 on data sovereignty — what does the verdict tell you that the feature score doesn't?
- Which hardware dimension is most likely to be faked in a protocol's marketing materials — and how would you verify it independently?