AI Problems

Are we giving away our agency to manifest our own destiny?

The AI problem isn't technical. It's that solving it requires the one thing humans are wired to avoid — thinking slowly about uncomfortable truths.

The Meta-Problem

Every AI problem assumes humans will engage with it rationally. They won't.

System 1 — fast, lazy, comfortable — dominates. People don't want hard decisions. They want to feel that what they're doing is ok.

| Solution | Why It Fails Without the Inner Loop |
|---|---|
| Regulate AI | People won't engage with what they don't understand |
| Align AI | Profit-first thinking dominates safety in every boardroom |
| International cooperation | Requires trust between nations wired for competition |
| Public awareness | System 1 rejects information that requires hard thinking |

The machines will handle the outer loops. The question is whether humans will train the muscle to engage — or look away until it's too late.

The Attention Loop

Synthetic data is a positive feedback loop with no setpoint.

AI generates content → captures eyeballs → attention data trains AI →
AI generates more of what captured attention → loop tightens

The setpoint isn't truth. It isn't value. It's engagement. And engagement selects for what's easy to consume, not what's worth consuming.
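The loop above can be sketched as a toy simulation (all numbers and the `engagement` function are hypothetical modeling assumptions, not measurements): when engagement is the only feedback signal, and engagement favors whatever is easiest to consume, effort-to-produce quality only ever falls.

```python
# Toy model of the attention loop. Assumption: engagement rises as the
# effort required to consume content falls. Nothing here is empirical;
# it only illustrates "positive feedback with no setpoint".

def engagement(effort: float) -> float:
    """Hypothetical: engagement is highest for zero-effort content."""
    return 1.0 - effort

def run_loop(effort: float, steps: int, rate: float = 0.5) -> float:
    """Each generation moves content in whichever direction raises engagement."""
    for _ in range(steps):
        # d(engagement)/d(effort) = -1 under the assumption above,
        # so the gradient step only ever lowers effort. No counter-signal.
        effort = max(0.0, effort - rate)
    return effort

print(run_loop(effort=0.9, steps=5))  # -> 0.0: the loop bottoms out at slop
```

There is no term in the update that references truth or value, which is the point: the system converges, just not anywhere worth going.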

| PID Term | What It Should Do | What It Actually Does |
|---|---|---|
| P (present) | Correct toward quality | Amplifies whatever got clicks today |
| I (history) | Accumulated wisdom | Accumulated bias — every past click reinforces the pattern |
| D (trend) | Anticipate where attention is heading | Accelerates toward lowest-effort content |

This is the microphone pointed at a speaker. No negative feedback. No correction. Just amplification.
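A standard PID update makes the table concrete (this is the textbook controller form, not anything a platform actually runs): if the error being minimized is an engagement shortfall rather than a quality shortfall, every term — present, history, trend — pushes in the same direction.

```python
# Textbook PID controller. The controller itself is fine; the argument in
# the text is that the *setpoint* is wrong: error measures engagement gap,
# not quality gap, so P, I, and D all drive toward more engagement.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0       # I: every past error accumulates
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float, dt: float = 1.0) -> float:
        error = setpoint - measured
        self.integral += error * dt                   # history reinforces the pattern
        derivative = (error - self.prev_error) / dt   # trend term chases acceleration
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical units: setpoint = "maximum engagement", measured = today's level.
pid = PID(kp=1.0, ki=0.1, kd=0.05)
print(pid.update(setpoint=1.0, measured=0.3))
```

A controller optimizing the wrong setpoint is not broken; it is working exactly as designed, which is what makes the failure hard to see.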

| Stage | What Happens | Who Benefits |
|---|---|---|
| Generate | AI produces synthetic content at scale | Platforms (volume) |
| Capture | Engagement-optimized content wins attention | Advertisers (eyeballs) |
| Train | Attention data becomes training signal | Model builders (data) |
| Amplify | Next generation optimizes harder for engagement | Nobody (quality degrades) |

The ethical problem isn't that AI generates content. It's that the feedback signal is attention, not value. A control system optimizing for the wrong setpoint still converges — it just converges toward slop.

The fix is the same as any engineering problem: introduce negative feedback. Measure against standards that represent genuine value, not engagement proxies. Blockchain can provide verification — provenance, attribution, quality attestation — but only if the standards are right at genesis.
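A minimal sketch of what "introduce negative feedback" means in this framing — assuming, hypothetically, that a quality score measured against an external standard exists at all (the hard part the text says must be right at genesis):

```python
# Sketch: blend the engagement drive with a negative-feedback term on
# quality. The weights, targets, and the very existence of a trustworthy
# quality score are assumptions, not a proposed implementation.

def control_signal(engagement: float, quality: float,
                   w_engagement: float = 1.0, w_quality: float = 2.0,
                   quality_target: float = 0.8) -> float:
    """Positive drive from engagement, corrective drive from a quality standard.

    When quality falls below the target, the correction opposes the
    engagement term: the microphone is pulled away from the speaker.
    """
    correction = w_quality * (quality_target - quality)  # negative feedback
    return w_engagement * engagement - correction

# High engagement, low quality: the correction dominates and the signal
# goes negative, i.e. the system damps instead of amplifying.
print(control_signal(engagement=0.9, quality=0.2))
```

The design choice is that quality enters with a larger weight than engagement; with the weights reversed, the correction exists but never wins, which is arguably the current state.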

The Void

You should really pause and reflect on the fact that many companies now are saying what we want to do is build AGI — AI that is as good as humans.

OK, what does it look like? What does a good society look like when we have humans and we have trillions of AI beings going around that are functionally much more capable?

And it's like... there's nothing. We're careening towards this vision that is just a void.

Will MacAskill

The void exists because nobody is doing the inner loop work. The questions aren't being asked. The systems aren't being built.

The Diagnostic

| Question | Current State | What Changes It |
|---|---|---|
| Who decides AI's direction? | Lab CEOs, investors | Governance beyond shareholders |
| What's the vision? | Void | Questions asked publicly |
| How do people engage? | They don't (System 1) | Systems that make engagement default |
| Where's accountability? | Corporate self-regulation | Standards with teeth |
| What do humans do for purpose? | Nobody's answering | Flow + capability |

Inner vs Outer

The outer loop problems — regulation, alignment, coordination — are real. But they're downstream of the same inner loop failure.

| Outer Loop (Technology) | Inner Loop (Human) |
|---|---|
| Align AI systems | Align your own intention |
| Regulate AI development | Develop your own capability |
| Build AI governance | Build your own prediction model |
| Ensure AI transparency | Practice your own honesty |

You can't solve the outer loop without the inner loop. And the inner loop is trainable.

Context