Investing Team
Can a team of specialized AI agents outperform a solo human investor?
DeFi math is not as hard as you think.
Agent Roles
Each agent has a defined job, specific data sources, and clear output format.
| Role | Job | Data Sources | Output |
|---|---|---|---|
| Valuation Agent | Calculate intrinsic value, flag mispricing | On-chain metrics, DefiLlama, Token Terminal | Fair value estimate + confidence band |
| Sentiment Agent | Gauge market mood, detect narrative shifts | Social media, funding rates, fear/greed index | Sentiment score (-1 to +1) + trend direction |
| Fundamentals Agent | Analyze protocol health and growth | Revenue, TVL, active users, developer commits | Health scorecard + growth trajectory |
| Technicals Agent | Identify entry/exit timing signals | Price action, volume, on-chain flow data | Signal (buy/sell/hold) + strength rating |
| Risk Manager | Set position limits, monitor exposure | Portfolio composition, correlation matrix, VaR | Max position size + portfolio risk score |
| Portfolio Manager | Make final decisions, generate orders | All agent outputs + human-set constraints | Trade orders with reasoning |
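The table above implies a shared output contract. A minimal sketch of what that schema could look like — the `AgentSignal` class, field names, and sample values are all hypothetical illustrations, not a real implementation:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical shared schema: every agent publishes its result in this
# shape so the Portfolio Manager can compare signals uniformly.
@dataclass
class AgentSignal:
    agent: str                              # e.g. "valuation"
    action: Literal["buy", "sell", "hold"]  # coarse recommendation
    confidence: float                       # 0.0 .. 1.0
    detail: dict                            # role-specific payload

# Illustrative outputs matching the table's "Output" column:
val = AgentSignal("valuation", "buy", 0.7,
                  {"fair_value": 2.10, "band": (1.80, 2.40)})
sent = AgentSignal("sentiment", "hold", 0.4,
                   {"score": -0.2, "trend": "falling"})
```

Keeping role-specific data in `detail` lets each agent evolve its payload without breaking the aggregation layer.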
Interaction Model
Signal aggregation, not majority vote:
- Each agent publishes a signal with confidence level
- Portfolio Manager weighs signals by historical accuracy, not equally
- Conflicting signals trigger deeper analysis, not paralysis
- Risk Manager has veto power — can block any trade that violates portfolio constraints
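The aggregation rule above — accuracy-weighted blending with a Risk Manager veto — can be sketched in a few lines. The `Signal` class and the `ACCURACY` hit-rate table are hypothetical placeholders, assuming hit rates are tracked per agent over a trailing window:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    agent: str
    direction: float    # -1.0 (strong sell) .. +1.0 (strong buy)
    confidence: float   # 0.0 .. 1.0

# Hypothetical historical hit rates used as weights: the Portfolio
# Manager trusts agents in proportion to past accuracy, not equally.
ACCURACY = {"valuation": 0.62, "sentiment": 0.51,
            "fundamentals": 0.58, "technicals": 0.49}

def aggregate(signals, risk_ok=True):
    """Accuracy- and confidence-weighted blend; Risk Manager veto wins."""
    if not risk_ok:
        return 0.0  # veto: no trade, regardless of other signals
    num = sum(ACCURACY[s.agent] * s.confidence * s.direction
              for s in signals)
    den = sum(ACCURACY[s.agent] * s.confidence for s in signals) or 1.0
    return num / den  # composite conviction in [-1, +1]
```

A composite near zero signals genuine conflict, which is the cue for deeper analysis rather than a forced trade.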
Conflict resolution:
- When Valuation says "cheap" but Sentiment says "fear" — that's an opportunity. Size appropriately
- When Technicals says "buy" but Fundamentals says "deteriorating" — that's a trap. Pass
- When all agents agree — that's either genuine conviction or a crowded trade. Check contrarian indicators
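These three heuristics are simple enough to encode directly. A sketch, assuming each agent's call arrives as a plain string label — the labels and the `resolve` function are illustrative, not a fixed vocabulary:

```python
def resolve(valuation, sentiment, fundamentals, technicals):
    """Map the conflict-resolution heuristics above to a coarse stance."""
    if valuation == "cheap" and sentiment == "fear":
        return "buy_small"         # opportunity: size appropriately
    if technicals == "buy" and fundamentals == "deteriorating":
        return "pass"              # likely trap
    if (valuation == "cheap" and technicals == "buy"
            and fundamentals == "healthy" and sentiment == "greed"):
        return "check_contrarian"  # unanimity may mean a crowded trade
    return "analyze_deeper"        # unresolved conflict: dig further
```

The fall-through default matters: unmatched combinations trigger deeper analysis rather than a forced trade.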
Failure Modes
| Agent | Failure Mode | Symptom | Fix |
|---|---|---|---|
| Valuation | Stale model assumptions | Fair value hasn't updated despite market regime change | Force model refresh on macro trigger events |
| Sentiment | Echo chamber data | All sources agree because they read each other | Add contrarian sources, weight primary data higher |
| Fundamentals | Lagging indicators | Metrics report the past, not the future | Weight leading indicators (dev activity, governance proposals) |
| Technicals | Overfitting | Perfect backtests, poor live performance | Out-of-sample validation, reduce indicator count |
| Risk Manager | Over-conservative | Blocks every trade, portfolio goes stale | Dynamic risk limits based on volatility regime |
| Portfolio Manager | Decision paralysis | Conflicting signals cause inaction | Time-box decisions, default to smallest safe action |
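The fix for decision paralysis — time-box, then default to the smallest safe action — can be sketched as a pure function. The `Signal` class, the 300-second deadline, and "hold" as the smallest safe action are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    agent: str
    action: str         # "buy" / "sell" / "hold"
    confidence: float   # 0.0 .. 1.0

def timeboxed_decision(signals, elapsed_s, deadline_s=300.0,
                       smallest_safe="hold"):
    """If the deadline has passed and agents still conflict, default to
    the smallest safe action instead of stalling indefinitely."""
    actions = {s.action for s in signals}
    if len(actions) == 1:
        return actions.pop()   # genuine consensus: act on it
    if elapsed_s >= deadline_s:
        return smallest_safe   # time-box expired: don't stall
    return None                # still within budget: keep analyzing
```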
Human Override
The human investor remains the outer loop. Override the agent when:
- Macro regime change — Agents trained on historical data can't anticipate unprecedented events (new regulation, protocol exploit, geopolitical shock)
- Ethical boundary — Agent recommends a position in a project you don't trust or that conflicts with your values
- Information asymmetry — You have private context (upcoming partnership, insider insight) that agents don't
- System failure — Agent outputs are contradictory, stale, or obviously wrong. Shut down and diagnose before trading
Rule: If you override more than 30% of agent recommendations in a month, either the agents need retraining or you need to trust the system more. Track which is right.
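Tracking that 30% rule is trivial if each decision is logged as a (recommendation, action taken) pair. A minimal sketch — the tuple format and the returned strings are hypothetical:

```python
def override_rate(decisions):
    """decisions: list of (recommendation, action_taken) pairs for the
    month. Returns the fraction the human overrode."""
    if not decisions:
        return 0.0
    overrides = sum(1 for rec, act in decisions if rec != act)
    return overrides / len(decisions)

def monthly_check(decisions, threshold=0.30):
    """Flag months where override frequency exceeds the 30% rule."""
    if override_rate(decisions) > threshold:
        return "investigate: retrain the agents or trust the system more"
    return "ok"
```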
Trading Stack
- Execution: Jupiter APIs for token swapping, Meteora/Orca for LP positions
- Data: Helius for indexed transaction data, Switchboard oracles for price feeds
- Speed: Jito Bundles for reliable execution, Helius RPC for priority fees
- Inference: Kuzco for on-chain inference payments
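Under the hood, the LP venues above quote swaps with AMM math. A minimal constant-product (x·y = k) sketch — the 0.3% fee and the reserve numbers are illustrative, and real pools (concentrated liquidity, dynamic fees) are more involved:

```python
def constant_product_out(x_reserve, y_reserve, dx, fee=0.003):
    """Output amount dy for swapping dx into a constant-product pool
    (x * y = k), after the pool fee. Illustrative, not a live quote."""
    dx_after_fee = dx * (1 - fee)          # fee taken on the input side
    k = x_reserve * y_reserve              # invariant before the swap
    new_x = x_reserve + dx_after_fee
    new_y = k / new_x                      # invariant must hold after
    return y_reserve - new_y               # what the trader receives

# Swap 1,000 X into a pool holding 100,000 X and 50,000 Y:
dy = constant_product_out(100_000, 50_000, 1_000)
```

Note `dy` is less than the naive spot-price answer of 500: the gap is price impact plus the fee, which is exactly what the Technicals and Risk agents must account for when sizing orders.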
Context
- Risk Management — The controls the Risk Manager enforces
- Investment Research — Data sources the agents consume
- AI Agents Thesis — The investment case for agent protocols
- Portfolio Management — Operating rhythm the team plugs into
Links
- AI Hedge Fund — Open source reference architecture (GitHub)
- Soleng Design — Solana agent engineering
- Solana Trading Bot — Tutorial walkthrough
- Formulas of Uniswap — The math behind AMMs
Questions
- If a team of specialized agents can process more data and act faster than a solo investor, what unique value does the human bring to the loop?
- When agents disagree, is the Portfolio Manager actually resolving the conflict or just picking the signal it agrees with?
- What happens when all agents are trained on the same historical data and a genuinely novel event occurs?
- At what portfolio size does the cost of running an agent team justify itself over manual management?