Questions Are the Moat
Information is now free. Answers cost nothing. The only thing left to compete on is the quality of what you ask.
The Equalization
AI has made two things free: the ability to build products, and access to answers. Every business now has the same tools, the same models, the same knowledge base. Competitive advantage built on information — on knowing something your competitor doesn't — is structurally over.
What doesn't equalize is accumulated understanding. The business that has spent two years asking sharp questions about their customers' pain knows something no model trained on public data can replicate. That understanding was built question by question. It lives in context, not in code.
Distribution becomes the moat when building is free — but distribution is not the mechanism. It is the outcome. What drives distribution is trust. And trust is built by demonstrating you understand someone's pain better than they can articulate it themselves. That understanding comes from questions.
What Accumulates
When a business asks better questions — of their customers, their strategy, their AI outputs — something specific happens. The question becomes context. Context produces better output. Better output builds trust. Trust earns distribution. Distribution creates more opportunity to ask better questions.
This is a virtuous loop. It compounds. The setpoint is the quality of the question. The gauge is the quality of what the answer reveals. The controller is the discipline to ask again, with what you learned.
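The loop above can be sketched in code. This is a minimal illustration, not anything from the text: the function names (`gauge`, `controller`, `run_loop`), the word-count heuristic, and the round limit are all assumptions chosen to make the setpoint → gauge → controller cycle concrete.

```python
# Illustrative sketch of the question loop: the setpoint is the target
# sharpness, the gauge scores what the answer reveals, and the controller
# refines the question with what was learned. All names and heuristics
# here are assumptions for illustration only.

def gauge(answer: str) -> float:
    """Toy heuristic: treat longer, more detailed answers as revealing more."""
    return min(len(answer.split()) / 50, 1.0)

def controller(question: str, learned: str) -> str:
    """Refine the question using what the last answer taught us."""
    return f"{question} Given that {learned}, what specifically changes?"

def run_loop(question: str, ask, setpoint: float = 0.8, max_rounds: int = 5):
    """Ask, measure the gap to the setpoint, refine, and ask again."""
    for _ in range(max_rounds):
        answer = ask(question)
        if gauge(answer) >= setpoint:        # gap closed: question was sharp enough
            return question, answer
        question = controller(question, answer)  # compound what was learned
    return question, answer
```

The structure, not the scoring, is the point: the output of each round feeds the next question, which is what makes the loop compound.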
Two businesses with identical tools will diverge based entirely on whose questions are sharper. The tools are table stakes. The questions are the moat.
The Test
Before any AI investment, any meeting, any product launch: ask the question without using the word "AI."
If the pain disappears when you remove the word, there was no pain. If the problem sharpens — becomes more specific, more urgent, more connected to something that already costs time or money — you have found the question worth answering.
This is the first move in any decision worth recording: name the business problem in plain language. Without the tool in the name. Everything else is the answer. The question is the moat.
Where to Run the Practice
In meetings. The AI strategy meeting exists because the moment a tech advisor chairs it, the questions shift from business pain to solution preference. The protocol gives the questions back to the owner. "What process hurts?" precedes "Which tool fixes it?" by at least one meeting. Remove the owner's questions and the meeting becomes a sales pitch.
In products. Every AI product is a feedback loop: define what good looks like, generate output, measure the gap. The gap is a question — five of them, specifically. The CRAFT checklist asks Correctness, Reliability, Alignment, Failsafe, and Trust before anyone writes code. Teams that ask these questions in advance have a definition of done. Teams that don't are shipping and hoping.
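A "written answer to all five before anyone writes code" gate can be made literal. The sketch below is hypothetical: the text names only the five CRAFT words, so the one-line gloss on each field is one plausible reading, and the `missing_answers` / `ready_to_ship` helpers are illustrative, not a real API.

```python
# Hypothetical CRAFT gate. Field names follow the checklist in the text;
# each gloss is an assumed interpretation, and the helpers are
# illustrative only.
from dataclasses import dataclass, fields

@dataclass
class CraftChecklist:
    correctness: str = ""   # assumed gloss: what does a right answer look like?
    reliability: str = ""   # assumed gloss: how often must it be right?
    alignment: str = ""     # assumed gloss: does it serve the named business pain?
    failsafe: str = ""      # assumed gloss: what happens when the model is wrong?
    trust: str = ""         # assumed gloss: would we show raw output to a customer?

def missing_answers(c: CraftChecklist) -> list[str]:
    """Return the CRAFT questions that still lack a written answer."""
    return [f.name for f in fields(c) if not getattr(c, f.name).strip()]

def ready_to_ship(c: CraftChecklist) -> bool:
    """The definition of done: every question answered before code is written."""
    return not missing_answers(c)
```

A team running this gate gets the number the question below asks for: how many products have a written answer to all five.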
In strategy. Most AI audits produce a list of tools. The quarterly reviews that compound produce a different artifact: clarity on what to stop. The review structures the three questions most businesses skip: What are we missing? What should we cut? Which tribe is absent from the conversation? The gaps are the strategy.
In the agency model. The AI-native agency that runs at software margins does so because its context profiles compound. Each client review, each quality pass, each editorial judgment is a question asked well and recorded. That context cannot be downloaded. It is built over time. It is the moat.
Context
- VVFL Loop — The loop that questions power: setpoint → gauge → controller → compound
- AI Strategy Meeting — Five phases that keep the business owner's questions in charge
- AI Evaluation — CRAFT — Five questions that make output quality testable before a line of code is written
- AI Priorities Review — The quarterly sequence for asking what to stop, not just what to start
- AI-Native Agency — The model where better questions become context profiles at software margins
- Decision Journal — Where good questions and their answers go so they compound across time
- Culture Is the Moat — Why the long-run outcome of a question practice is culture, not just competitive advantage
Links
- Karpathy — Distribution Builds a Moat — When anyone can build, distribution is what remains
- Karpathy — What I Think About LLMs — How the quality of the question shapes what the model produces
Questions
- When AI gives every competitor the same answers, what is your question practice — and does it compound?
- The CRAFT checklist asks five questions before shipping — how many of your current AI products have a written answer to all five, and what does that number reveal about your definition of done?
- If you removed the word "AI" from your next strategy meeting agenda, which agenda items would survive — and what does that tell you about what you're actually solving for?
- Which of the three tribes — Explorer, Automator, Validator — is missing from your quarterly AI review, and which questions are going unasked because of that gap?
- What context has your business accumulated about customer pain that no model trained on public data could replicate — and where is it written down so it outlasts the person who discovered it?