AI Problems
Are we giving away our agency to manifest our own destiny?
The AI problem isn't technical. It's that solving it requires the one thing humans are wired to avoid — thinking slowly about uncomfortable truths.
The Meta-Problem
Every AI problem assumes humans will engage with it rationally. They won't.
System 1 — fast, lazy, comfortable — dominates. People don't want to make hard decisions. They want to feel that what they're doing is okay.
| Solution | Why It Fails Without the Inner Loop |
|---|---|
| Regulate AI | People won't engage with what they don't understand |
| Align AI | Profit-first thinking dominates safety in every boardroom |
| International cooperation | Requires trust between nations wired for competition |
| Public awareness | System 1 rejects information that requires hard thinking |
The machines will handle the outer loops. The question is whether humans will train the muscle to engage — or look away until it's too late.
The Attention Loop
Synthetic data is a positive feedback loop with no setpoint.
AI generates content → captures eyeballs → attention data trains AI →
AI generates more of what captured attention → loop tightens
The setpoint isn't truth. It isn't value. It's engagement. And engagement selects for what's easy to consume, not what's worth consuming.
| PID Term | What It Should Do | What It Actually Does |
|---|---|---|
| P (present) | Correct toward quality | Amplifies whatever got clicks today |
| I (history) | Accumulated wisdom | Accumulated bias — every past click reinforces the pattern |
| D (trend) | Anticipate where attention is heading | Accelerates toward lowest-effort content |
This is the microphone pointed at a speaker. No negative feedback. No correction. Just amplification.
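The microphone-at-a-speaker dynamic can be sketched as a simple iteration: each round optimizes toward whatever captured attention last round, so the loop gain is above one and nothing pulls the signal back. A minimal sketch with illustrative numbers, not real platform data:

```python
# Positive feedback with no setpoint: each round amplifies whatever
# got clicks last round. Gain > 1, and there is no corrective term.
def attention_loop(rounds: int, gain: float = 1.3) -> list[float]:
    engagement = 1.0           # arbitrary starting attention level
    history = []
    for _ in range(rounds):
        engagement *= gain     # amplify what captured attention; nothing pulls it back
        history.append(engagement)
    return history

trajectory = attention_loop(10)
# Like the microphone pointed at its own speaker: the signal only grows.
assert all(later > earlier for earlier, later in zip(trajectory, trajectory[1:]))
```

With gain 1.3, ten rounds already multiply the starting level more than tenfold; the loop diverges rather than converging on anything.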
| Stage | What Happens | Who Benefits |
|---|---|---|
| Generate | AI produces synthetic content at scale | Platforms (volume) |
| Capture | Engagement-optimized content wins attention | Advertisers (eyeballs) |
| Train | Attention data becomes training signal | Model builders (data) |
| Amplify | Next generation optimizes harder for engagement | Nobody (quality degrades) |
The ethical problem isn't that AI generates content. It's that the feedback signal is attention, not value. A control system optimizing for the wrong setpoint still converges — it just converges toward slop.
The fix is the same as any engineering problem: introduce negative feedback. Measure against standards that represent genuine value, not engagement proxies. Blockchain can provide verification — provenance, attribution, quality attestation — but only if the standards are right at genesis.
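In loop terms, "introduce negative feedback" means adding an error signal measured against a value setpoint, so deviations get corrected instead of amplified. A hedged sketch: `value_setpoint` and the proportional gain `k_p` here are placeholders for standards that don't yet exist, not a real metric.

```python
# Negative feedback: measure output against a value setpoint and correct
# toward it, instead of amplifying raw engagement. The setpoint is a
# stand-in for standards that would have to be right "at genesis".
def corrected_loop(rounds: int, value_setpoint: float = 1.0,
                   k_p: float = 0.5, gain: float = 1.3) -> list[float]:
    quality = 0.2              # arbitrary starting quality level
    history = []
    for _ in range(rounds):
        quality *= gain                      # engagement pressure, as before
        error = value_setpoint - quality     # deviation from the standard
        quality += k_p * error               # proportional correction pulls back
        history.append(quality)
    return history

trajectory = corrected_loop(30)
# The corrective term makes the loop converge to a bounded fixed point
# instead of diverging.
assert all(value < 2 for value in trajectory)
```

Note the classic control-system caveat: proportional-only correction converges, but to a fixed point offset from the setpoint — which is exactly why the I term in the table exists. The point is structural: any corrective term at all changes divergence into convergence.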
The Void
You should really pause and reflect on the fact that many companies now are saying what we want to do is build AGI — AI that is as good as humans.
OK, what does it look like? What does a good society look like when we have humans and we have trillions of AI beings going around that are functionally much more capable?
And it's like... there's nothing. We're careening towards this vision that is just a void.
The void exists because nobody is doing the inner loop work. The questions aren't being asked. The systems aren't being built.
The Diagnostic
| Question | Current State | What Changes It |
|---|---|---|
| Who decides AI's direction? | Lab CEOs, investors | Governance beyond shareholders |
| What's the vision? | Void | Questions asked publicly |
| How do people engage? | They don't (System 1) | Systems that make engagement default |
| Where's accountability? | Corporate self-regulation | Standards with teeth |
| What do humans do for purpose? | Nobody's answering | Flow + capability |
Inner vs Outer
The outer loop problems — regulation, alignment, coordination — are real. But they're downstream of the same inner loop failure.
| Outer Loop (Technology) | Inner Loop (Human) |
|---|---|
| Align AI systems | Align your own intention |
| Regulate AI development | Develop your own capability |
| Build AI governance | Build your own prediction model |
| Ensure AI transparency | Practice your own honesty |
You can't solve the outer loop without the inner loop. And the inner loop is trainable.
Links
- Will MacAskill: Century in a Decade — The void
- MIT AI Risk Library — Risk taxonomy
- Technology is Not Values Neutral — Consilience Project
Context
- Trust Architecture — Structure over intention
- The Meta-Problem — Same tools deceive and verify
- Problems — The metacrisis
- Questions — Converting problems into progress
- Flow State — What humans do for purpose
- Control System — Engineering what willpower tries
- Agent Commerce — When AI handles money